I read the sections about optimization levels in the find manual and info pages, and I cannot understand why I should not always use the most aggressive optimization level.
The only relevant sentences I found were these (from man find, version 4.4.2):
Conversely, optimisations that prove to be reliable, robust and effective may be enabled at lower optimisation levels over time.
The findutils test suite runs all the tests on find at each optimisation level and ensures that the result is the same.
If I understood correctly, this is about proving the correct behaviour of find through findutils, but this test suite only ensures that all optimization levels give the same result.
You're missing this sentence:
The cost-based optimiser has a fixed idea of how likely any given test is to succeed.
That means if you have a directory with highly atypical contents (e.g. a lot of named pipes and very few "regular" files), the optimizer may actually worsen the performance of your query (in this case by assuming -type f is more likely to succeed than -type p when the reverse is true). In a situation like this, you're better off hand-optimizing it, which is only really possible at -O1 or -O2.
Even ignoring this issue, the fixed costs of the cost-based optimizer are difficult to get right. There are multiple pieces of hardware and software involved (the hard disk, the kernel, the filesystem) which all do some caching and optimization of their own. As a result, it is very hard to predict how expensive different operations will be, even relative to one another (e.g. we know readdir(2) is cheaper than stat(2), but not how much cheaper). This means cost-based optimization is not guaranteed to produce the best ordering even for typical filesystem contents. The lower optimization levels allow you to hand-tune your query by trial and error, which may be more reliable, if more laborious.
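To make the idea concrete, here is a toy sketch in Python of how a fixed table of assumed success probabilities can drive predicate ordering; the numbers and the scoring rule are invented for illustration and are not find's actual implementation.

```python
# Toy model of cost-based predicate ordering. The costs and probabilities
# are invented; this is NOT find's real table or algorithm.
ASSUMED = {
    "-type f": {"cost": 1.0, "p_success": 0.95},  # optimizer assumes regular files dominate
    "-type p": {"cost": 1.0, "p_success": 0.01},  # ...and named pipes are rare
}

def order_for_or(tests):
    # In an OR chain a successful test short-circuits the rest, so cheap,
    # likely-to-succeed tests are scheduled first.
    return sorted(tests, key=lambda t: (ASSUMED[t]["cost"], -ASSUMED[t]["p_success"]))

print(order_for_or(["-type p", "-type f"]))  # ['-type f', '-type p']
# In a directory that is mostly named pipes the real probabilities are
# reversed, so this fixed ordering runs the unprofitable test first; at
# -O1 or -O2 (as noted above) you can keep the order you wrote by hand.
```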
I am learning FP and have been introduced to the concept of property-based testing (PBT); to someone from the OOP world, PBT looks both useful and dangerous. It checks a lot of inputs, but what if there are one or more inputs that fail, yet they did not fail during your first (say) Jenkins build? The next time you run the build the test may or may not fail. Doesn't that kill the entire idea of repeatable builds?
I see that some people have explored ways to make the tests deterministic, but then if such a test doesn't catch an error, it never will.
So what's the better approach here? Do we sacrifice build repeatability to eventually uncover a bug, or do we take the risk of never uncovering it but get our repeatability back?
(I hope I have properly understood the concept of PBT; if I haven't, I would appreciate it if somebody could point out my misconceptions.)
Doing a lot of property-based testing, I don't see non-determinism as a big problem. I basically encounter three types of it:
1. A property is genuinely non-deterministic because some external factor - e.g. a timeout, delay, or db config - makes it so. Such flaky tests also show up in example-based testing and should be eliminated by making the external factor deterministic.
2. A property fails rarely because the triggering condition is only sometimes met by pseudo-random data generation. Most PBT libraries have ways to reproduce those failing runs, e.g. by re-using the random seed of the failing test run or even remembering the exact failing case in a database of some sort (see the sketch after this list). Those failures reveal real problems and are one of the reasons why we do random test-case generation in the first place.
3. Coverage assertions ("this condition will be hit in at least 5 percent of all cases") may fail from time to time even though they are generally true. This can be mitigated by raising the number of tries. Some libraries, e.g. QuickCheck, do their own calculation of how many tries are needed to prove/disprove coverage assumptions and thereby mostly eliminate those false positives.
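For instance, with Python's Hypothesis library (an illustration of the same mechanism; my own experience is with jqwik, as mentioned below), a rare failure can be replayed deterministically by pinning the seed, and Hypothesis also remembers failing examples in its local example database:

```python
# A minimal sketch using Python's Hypothesis: pinning the seed replays the
# same pseudo-random generation, so a rare failure becomes reproducible.
from hypothesis import given, seed, strategies as st

@seed(20240101)                      # replay the exact run that failed
@given(st.lists(st.integers()))
def test_sorting_is_idempotent(xs):
    assert sorted(sorted(xs)) == sorted(xs)
```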
The important thing is to always follow up on flaky failures and find the bug, the non-deterministic external factor, or the wrong assumption in the property's invariant. When you do that, sporadic failures will occur less and less often. My personal experience is mostly with jqwik, but other people have been telling me similar stories.
You can have both non-determinism and reproducible builds by generating the randomness outside the build process. You could generate it during development or during external testing.
One example would be to seed your property-based tests and to automatically modify this seed on commit. You're still making a tradeoff: a developer could be alerted to a bug unrelated to what they're working on, and you lose some test capacity since the tests might change less often.
You can tip the tradeoff further in the deterministic direction by making the seed change less often. You could for example have one seed for each program component or file, and only change it when a related file is committed.
A different approach would be to not change the seed during development at all. You would instead have automatic QA doing periodic or continuous testing with random seeds and use them to generate bug reports/issues that can be dealt with when convenient.
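As a sketch of the commit-tied variant (the helper and test here are made up for illustration), the seed can be derived from the current commit hash, so every rebuild of the same commit generates exactly the same cases while new commits still explore new inputs:

```python
# Sketch: derive the property-based-testing seed from the current commit so
# rebuilds of the same commit are deterministic. Names are illustrative.
import subprocess
from hypothesis import given, seed, strategies as st

def commit_seed() -> int:
    rev = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode().strip()
    return int(rev, 16) % (2**32)    # fold the hex hash into a 32-bit seed

@seed(commit_seed())
@given(st.text())
def test_utf8_roundtrip(s):
    assert s.encode("utf-8").decode("utf-8") == s
```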
johanneslink's analysis of non-determinism is spot on.
There's one thing I would like to add: non-determinism is not only a rare and small cost, it's also beneficial. If the first run of your test suite is successful, insisting on determinism means insisting that future runs (of the same suite against the same system) will find zero bugs.
Usually most test suites contain many independent tests of many independent system parts, and commits rarely change large parts of the system. So even across commits, most tests test exactly the same thing before and after, where once again determinism guarantees that you will find zero bugs.
Allowing for randomness means every run has at least a chance of discovering a bug.
That of course raises the question of regression tests. I think the standard argument is something like this: to maximize value per effort you should focus your testing on the most bug-prone parts of the code. Having observed a bug in the past provides evidence about which part of the code is buggy (and which kind of bug it's likely to have). You should use that evidence to guide your testing effort. (Often with a laser-like focus on one concrete bug.)
I think this is a very reasonable argument. I also think there's more than one way of making good use of the evidence provided by bugs.
For example, you might write a generator which produces data of the same kind and shape as the data which triggered the bug the first time, and/or which is tailor made to trigger the bug.
And/or, you might want to write tests verifying specifically those properties that were violated by the buggy behavior.
If you want to judge how good these tests are, I recommend running them a couple of times (on normally sized input batches). If they trigger the bug every time, they are likely to keep doing so in the future.
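As a concrete sketch (the function and the bug are hypothetical, and Hypothesis stands in for whatever PBT library you use), a regression generator narrows the input space to the kind and shape of data that triggered the original bug:

```python
# Sketch of a regression property: suppose (hypothetically) parse_price()
# once broke on amounts with exactly three decimal digits. The generator is
# narrowed to that shape instead of "any string".
from decimal import Decimal
from hypothesis import given, strategies as st

def parse_price(text: str) -> Decimal:       # stand-in for the code under test
    return Decimal(text)

three_decimal_prices = st.decimals(
    min_value=0, max_value=10**6, places=3   # exactly the shape that once failed
).map(lambda d: f"{d:.3f}")

@given(three_decimal_prices)
def test_parse_price_roundtrips(text):
    assert f"{parse_price(text):.3f}" == text
```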
Here's a (hopefully thought-)provoking question: is it worse to release software which has a bug it has had before, or to release software with new bugs? In other words: is catching past bugs more important than catching new ones, or do we do it primarily because it's easier?
If you think we do it in part because it's easier, then I don't think it matters that re-catching the bug is probabilistic: what you should really care about is something like the average bug-catching ability of property testing; its benefits elsewhere should outweigh the fairly small chance that an old bug squeaks through, even though it got caught in (say) 5 consecutive runs of the tests when you evaluated your regression tests.
Now, if you can't reliably generate random inputs that trigger the bug even though you understand the bug just fine, or the generator which does it is large and complicated and thus costly to maintain, hand-picking a regression example seems like a perfectly reasonable choice.
Does Frama-C provide any tools for proving the run-time characteristics of a function such as execution time (possibly as instruction count) and heap memory space (counted as bytes allocated)?
Concerning execution time estimation
Frama-C works at the C level. The Metrics plug-in can provide a few metrics (such as statement count) on a version of the source very close to the original one (-metrics -metrics-ast cabs), or on the normalized source (often referred to as Cil code) that it uses. However, it does not have any knowledge of assembly code, therefore it cannot provide precise information about execution time at this level.
Since compiler optimizations impact code generation, the numbers given by Frama-C may or may not be close to what a compiler will produce, depending on which optimizations are enabled, what is known about the compiler and the target architecture, etc. In the general case, Frama-C cannot give any guarantees; in specific situations, it is possible to develop plug-ins to provide some of this information (e.g. the Cost plug-in, mentioned here, uses annotations to try to maintain some correspondence between source and compiled code, and then uses them to provide some execution time information).
Concerning memory size estimation
There is an option, -metrics-locals-size, which does a rough estimation of the stack memory usage by a function. As in the previous case, this is only an estimation based on the source code. Compilers are likely to stack-allocate temporary variables for computing temporary subexpressions, or for register spilling, so the numbers given by Frama-C cannot be used in a worst-case stack estimation.
Dynamic memory allocation is supported in ACSL, so in theory it is possible to write annotations concerning it. However, current plug-ins do not provide a direct way to handle this precisely; it might require writing a new plug-in or, at least, an abstract domain for Eva.
Eva currently handles dynamic allocation, but probably not precisely enough for estimating heap size in an interesting way. It is possible to develop an abstract domain for Eva that would keep track of this information (adding mallocs and subtracting frees) and compute an overapproximation of the heap memory space, but this would require being able to bound the number of iterations of loops containing allocations (otherwise the upper bound would be infinite). Precision would depend on the complexity of the program.
For runtime verification, the E-ACSL plug-in already tracks some stack/heap usage information (even though it is not currently exported to the user), so in theory one could write an assertion similar to //@ assert heap_size <= \old(heap_size) + 42; and have it checked at runtime, when running the instrumented program.
To complement anol's answer, the PathCrawler plug-in (the online version can be used freely, but the plug-in itself is proprietary) has been used to generate sets of test cases covering all paths of C functions. This article explains under which assumptions this can be used as the basis for WCET measurement, but basically the issues are the ones already mentioned by anol: without precise knowledge of the work done by the compiler and of the underlying hardware, which is not something Frama-C provides natively, things are going to be quite rough.
There has apparently been some recent work, as a bachelor's project in Amsterdam, taking the same route of using PathCrawler to generate execution traces covering a sufficiently large proportion of the search space.
I'm giving TDD a serious try today and have found it really helpful, in line with all the praise it receives.
In my quest for exercises on which to learn both Python and TDD, I have started to code SPOJ exercises using the TDD technique, and I have arrived at a question:
Given that SPOJ's exercises are mostly math applied to programming, how does one test a math procedure in the TDD fashion? With sample known-to-be-correct data? Against a known implementation?
I have found that using the sample data given in the problem itself is valuable, but it feels like overkill for something you can test so quickly in the console, not to mention the overhead of designing your algorithms in a testable fashion (proxying the stdout and stdin objects is simply too much work for a really small reward). While this is good because it forces you to think about your solutions in testable terms, I think I might be trying way too hard on this.
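To make the "testable fashion" part concrete, this is the kind of structure I mean (a sketch; the problem and the function are made up): the stdin/stdout handling stays in a thin main(), and only the pure function gets tested, so no proxying is needed.

```python
# solution.py -- sketch: keep SPOJ-style stdin/stdout handling in main() and
# put the actual math in a pure function that tests can call directly.
import sys

def count_divisors(n: int) -> int:
    count, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            count += 1 if d * d == n else 2   # count d and n // d as a pair
        d += 1
    return count

def main() -> None:
    for line in sys.stdin:
        print(count_divisors(int(line)))

if __name__ == "__main__":
    main()
```

```python
# test_solution.py -- the sample data from the problem statement becomes
# ordinary unit tests.
from solution import count_divisors

def test_sample_data():
    assert count_divisors(1) == 1
    assert count_divisors(12) == 6
    assert count_divisors(36) == 9    # perfect square: 6 is counted once
```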
All guidance is welcome
Test all edge cases. Your program is most likely to fail when the input is special for some reason: negative or zero values, very large values, inputs in reverse order, empty inputs. You might also want some destructive tests to see how large the inputs can be before things break or grind to a halt.
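For example (a sketch using pytest; running_max is a made-up function, not something from the question), a parametrised test keeps all of those special inputs in one table that is cheap to extend:

```python
# Sketch with pytest: the "special" inputs listed above become a table of
# cases. running_max is a made-up example function.
import pytest

def running_max(values):
    result, best = [], None
    for v in values:
        best = v if best is None or v > best else best
        result.append(best)
    return result

@pytest.mark.parametrize("values, expected", [
    ([], []),                         # empty input
    ([0], [0]),                       # zero
    ([-5, -7, -1], [-5, -5, -1]),     # negative values
    ([3, 2, 1], [3, 3, 3]),           # reverse order
    ([10**18, 1], [10**18, 10**18]),  # very large values
])
def test_running_max(values, expected):
    assert running_max(values) == expected
```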
Sphere Online Judge may not be the best fit for TDD. For one thing, the input data might be better behaved than what a real person might type in. Secondly, there is a code size limit on some problems, and an extensive test suite might put you over that limit.
You might want to take a look at Uncle Bob's "Transformation Priority Premise." It offers some guidance on how to pick a sequence of tests to test drive an algorithm.
Use sample inputs for which you know the results (outputs). Use equivalence partitioning to identify a suitable set of test cases. With maths code you might find that you cannot implement as incrementally as for other code: you might need several test cases for each incremental improvement. By that I mean that non-maths code can typically be thought of as having a set of "features" that you can implement one at a time, but maths code is not much like that.
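As a sketch of what equivalence partitioning looks like for maths code (the leap-year rule is just a stand-in example), you pick one representative input per partition of the input domain rather than many near-identical cases:

```python
# Sketch: one representative per equivalence class of the leap-year rule.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_leap_year_partitions():
    assert not is_leap_year(2019)   # not divisible by 4
    assert is_leap_year(2024)       # divisible by 4, not by 100
    assert not is_leap_year(1900)   # divisible by 100, not by 400
    assert is_leap_year(2000)       # divisible by 400
```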
Everyone is aware of Dijkstra's Letters to the editor: go to statement considered harmful (also here .html transcript and here .pdf). I was wondering whether anyone has attempted to find a way to make code using gotos re-usable, maintainable and not harmful, by adding other language extensions or developing a language which allows for gotos.
The reason I ask is that it occurs to me that code written in assembly language often used gotos and global variables to make the program work well within a limited space. The Atari 2600, for example, had 128 bytes of RAM, and the program was loaded from a ROM cartridge. In that case it was better to use unstructured programming and make the most of the freedoms this allows, to make the most of a very limited space for the program.
When you compare this with a game programmed today without the use of gotos, the game takes up much more space.
Then it occurs to me that perhaps it's possible to program with gotos if some rules or other language changes are made to support this; then the negative effects of gotos could be reduced or eliminated. Has anyone tried to make gotos NOT considered harmful, by creating a language or some rules to follow under which gotos are not harmful?
If no one has looked for a way to use gotos in a non-harmful way, then perhaps we adopted structured programming unnecessarily, based solely on this paper? Perhaps there is another solution which allows the use of gotos without the downside.
Comparing gotos to structured programming is comparing a situation where the programmer has to remember what every label in the code actually means and does, and where it is, to a situation where the conditional branches are explicitly described.
As for the advantages of the goto statement regarding the space a program might take: I think that games today are big because of the graphic and sound resources they use, that is, showing 1,000,000 polygons. The cost of a goto compared to that is totally negligible.
Moreover, structured control statements are ultimately compiled into goto ("jmp") instructions by the compiler when it outputs assembly.
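You can see the same thing one level up, in Python bytecode rather than assembly (an illustration of the same point, not of the compilers discussed here): structured loops and conditionals are compiled down to jump instructions.

```python
# Illustration: Python's own compiler turns structured while/if/break into
# conditional and unconditional jumps, the bytecode analogue of goto/jmp.
import dis

def countdown(n):
    while n > 0:        # compiled to a conditional jump back to the loop head
        if n == 2:
            break       # compiled to a jump past the end of the loop
        n -= 1

dis.dis(countdown)      # the listing shows JUMP_* / POP_JUMP_* instructions
```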
To answer the question, it might be possible to make goto less harmful by creating naming and syntax conventions. Turning those conventions into enforced rules is, however, pretty much what structured programming does.
Linus Torvalds once argued that goto can make source code clearer, but goto is useful only in such special cases that I, as a programmer, would not dare use it.
This question is somewhat related to yours, since I think it covers one of the most common situations where a goto is needed.
Assuming that I am interested in performance rather than portability of my linear-algebra iterative multi-threaded solver, and that I have the results of profiling my code in hand, how do I go about tuning my code to run optimally on the machine of my choice?
The algorithm involves Matrix-Vector multiplications, norms and dot-products. (FWIW, I am working on CG and GMRES).
I am working with matrices whose size is roughly equivalent to the full size of the RAM (~6 GB). I'll be working on an Intel i3 laptop and linking my code against Intel MKL.
Specifically,
Is there a good resource (PDF/book/paper) for learning manual tuning? There are numerous things that I learnt by doing, for instance that manual unrolling isn't always optimal, or things about compiler flags, but I would prefer a centralized resource.
I need something to translate profiler information into improved performance. For instance, my profiler tells me that the stack of one processor is being accessed by another, or that my mulpd assembly instruction is taking too much time. I have no clue what these mean or how I could use this information to improve my code.
My intention is to spend as much time as needed to squeeze out as much compute power as possible. It's more of a learning experience than for actual use or distribution, as of now.
(I am concerned with manual tuning, not auto-tuning.)
Misc Details:
This differs from usual performance tuning since the major portions of the code are linked to Intel's proprietary MKL library.
Because of memory-bandwidth issues in the O(N^2) matrix-vector multiplications, and because of dependencies, there is a limit to what I can manage on my own through simple observation (see the rough estimate after this list).
I write in C and Fortran and have tried both; as discussed a million times on SO, I found no difference between them if I tweak them appropriately.
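For a rough sense of why the matrix-vector product is bandwidth-bound rather than compute-bound, here is a back-of-the-envelope estimate; the 20 GB/s figure is an assumed, typical dual-channel laptop value, not a measured property of any particular i3.

```python
# Back-of-the-envelope roofline estimate for a dense double-precision
# matrix-vector product. All hardware numbers are assumptions, not measurements.
N = 28_000                       # 8 * N * N bytes is roughly 6 GB of doubles
bytes_moved = 8 * N * N          # the matrix must be streamed from RAM at least once
flops = 2 * N * N                # one multiply and one add per matrix entry
intensity = flops / bytes_moved  # 0.25 flop/byte, independent of N
bandwidth = 20e9                 # assumed sustained memory bandwidth in bytes/s

print(f"arithmetic intensity: {intensity} flop/byte")
print(f"bandwidth-limited ceiling: {intensity * bandwidth / 1e9:.1f} GFLOP/s")
# ~5 GFLOP/s no matter how well the arithmetic itself is tuned, which is why
# memory access, not instruction selection, dominates this kernel.
```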
Gosh, this still has no answers. After you've read this you'll still have no useful answers ...
You imply that you've already done all the obvious and generic things to make your codes fast. Specifically you have:
chosen the fastest algorithm for your problem (either that, or your problem is to optimise the implementation of an algorithm rather than to optimise the finding of a solution to a problem);
worked your compiler like a dog to squeeze out the last drop of execution speed;
linked in the best libraries you can find which are any use at all (and tested to ensure that they do in fact improve the performance of your program);
hand-crafted your memory access to optimise r/w performance;
done all the obvious little tricks that we all do (e.g. when comparing the norms of 2 vectors you don't need to take a square root to determine that one is 'larger' than another, ...; see the sketch after this list);
hammered the parallel scalability of your program to within a gnat's whisker of the S==P line on your performance graphs;
always executed your program on the right size of job, for a given number of processors, to maximise some measure of performance;
and still you are not satisfied!
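The norm trick in that list, spelled out as a sketch: since the square root is monotonic, comparing squared 2-norms gives the same answer without ever calling sqrt.

```python
# Sketch of the norm-comparison trick from the list above: sqrt() is
# monotonic, so comparing squared norms decides "larger" without the root.
def norm_sq(v):
    return sum(x * x for x in v)

def larger_norm(u, v):
    # same result as comparing math.sqrt(norm_sq(u)) with math.sqrt(norm_sq(v))
    return u if norm_sq(u) >= norm_sq(v) else v

print(larger_norm([3.0, 4.0], [1.0, 1.0]))   # [3.0, 4.0]
```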
Now, unfortunately, you are close to the bleeding edge and the information you seek is not to be found easily in books or on web-sites. Not even here on SO. Part of the reason for this is that you are now engaged in optimising your code on your platform and you are in the best position to diagnose problems and to fix them. But these problems are likely to be very local indeed; you might conclude that no-one else outside your immediate research group would be interested in what you do, I know you wouldn't be interested in any of the micro-optimisations I do on my code on my platform.
The second reason is that you have stepped into an area that is still an active research front and the useful lessons (if any) are published in the academic literature. For that you need access to a good research library, if you don't have one nearby then both the ACM and IEEE-CS Digital Libraries are good places to start. (Post or comment if you don't know what these are.)
In your position I'd be looking at journals on 2 topics: peta- and exa-scale computing for science and engineering, and compiler developments. I trust that the former is obvious, the latter may be less obvious: but if your compiler already did all the (useful) cutting-edge optimisations you wouldn't be asking this question and compiler-writers are working hard so that your successors won't have to.
You're probably looking for optimisations which, like loop unrolling say, were relatively difficult to find implemented in compilers 25 years ago and were therefore bleeding-edge back then, and which themselves will be old and established in another 25 years.
EDIT
First, let me make explicit something that was originally only implicit in my 'answer': I am not prepared to spend long enough on SO to guide you through even a summary of the knowledge I have gained in 25+ years in scientific/engineering and high-performance computing. I am not given to writing books, but many are and Amazon will help you find them. This answer was way longer than most I care to post before I added this bit.
Now, to pick up on the points in your comment:
on 'hand-crafted memory access': start at the Wikipedia article on 'loop tiling' (see, you can't even rely on me to paste the URL here) and read on from there; you should be able to quickly pick up the terms you can use in further searches.
on 'working your compiler like a dog' I do indeed mean becoming familiar with its documentation and gaining a detailed understanding of the intentions and realities of the various options; ultimately you will have to do a lot of testing of compiler options to determine which are 'best' for your code on your platform(s).
on 'micro-optimisations', well, here's a start: Performance Optimization of Numerically Intensive Codes. Don't run away with the idea that you will learn all (or even much) of what you want to learn from this book. It's now about 10 years old. The take-away messages are:
performance optimisation requires intimacy with machine architecture;
performance optimisation is made up of 1001 individual steps and it's generally impossible to predict which ones will be most useful (and which ones actually harmful) without detailed understanding of a program and its run-time environment;
performance optimisation is a participation sport; you can't learn it without doing it;
performance optimisation requires obsessive attention to detail and good record-keeping.
Oh, and never write a clever piece of optimisation that you can't easily un-write when the next compiler release implements a better approach. I spend a fair amount of time removing clever tricks from 20-year-old Fortran that were justified (if at all) on the grounds of boosting execution performance but which now just confuse the programmer (they annoy the hell out of me too) and get in the way of the compiler doing its job.
Finally, one piece of wisdom I am prepared to share: these days I do very little optimisation that is not under one of the items in my first list above; I find that the cost/benefit ratio of micro-optimisations is unfavourable to my employers.