Is it possible to use a btCompoundShape as a child shape of a different btCompoundShape? - bullet

The question is fairly self-explanatory. I am left with the need for a single btRigidBody made up of an incredibly large number of btBoxShape primitives. The way my program is currently written lends itself rather well to the setup I described in the question, where there are multiple btCompoundShape objects that contain these btBoxShape primitives, and all of them feed into one overarching btCompoundShape, which is the shape that would then be used by the btRigidBody.
Unfortunately, it will take some time to implement, and I am hoping to have a yes-or-no answer before I begin so that I can pursue other means if necessary. That said, if no answer is forthcoming I will go ahead anyway and answer my own question here after attempting the implementation myself.

Yes, you can add a btCompoundShape child shape to a btCompoundShape: it allows recursion.
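For illustration, here is a minimal C++ sketch of the nesting (the box size and transforms are placeholders, not from your code):

#include <btBulletCollisionCommon.h>

// Build an inner compound of box primitives, then nest it
// inside an outer compound.
btCompoundShape* buildNestedCompound()
{
    btCompoundShape* inner = new btCompoundShape();
    btBoxShape* box = new btBoxShape(btVector3(0.5f, 0.5f, 0.5f));

    btTransform local;
    local.setIdentity();
    local.setOrigin(btVector3(1.0f, 0.0f, 0.0f));
    inner->addChildShape(local, box);

    // addChildShape takes any btCollisionShape*, and a
    // btCompoundShape is one, so the recursion just works.
    btCompoundShape* outer = new btCompoundShape();
    local.setOrigin(btVector3(0.0f, 2.0f, 0.0f));
    outer->addChildShape(local, inner);

    return outer; // hand this to the btRigidBody as its collision shape
}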

Related

Write fast Common Lisp code [closed]

I'm not sure whether some of these unusual things actually make my code faster:
Is it normally better to use built-in operations, or to write new specialized functions that do the same thing?
(for example, a version of #'map only for vectors; my version is often faster without type declarations)
Should I define new (complicated) types to use in declarations?
(for example, a typed list)
Should I define slots directly on an object? (for example, px and py for a 2-dimensional object, or is it OK to use one slot pos of type vector so that I can reuse it for other purposes?)
There are a few parts to this, but here is a quick braindump.
PROFILE!
Use a distribution of CL that has a profiler built in; I use sbcl, for example: http://www.sbcl.org/1.0/manual/Statistical-Profiler.html
The nice thing about the sbcl profiler is that once you have profiled a function, if you disassemble it, the machine code is annotated with statistics. This requires some knowledge of the target machine code.
Do not underestimate your implementation: it can have advanced type and flow analysis built in and may be able to, for example, pick a vector-only version of map when it makes sense.
Learn compiler macros: compiler macros can shadow functions, which gives you a place to put extra optimizations based on the context of the form. It does this without replacing the function, so the function can still be used in a higher-order way.
Learn Type declarations
I found this series of blog posts helped me understand this technique: http://nklein.com/tags/optimization/page/2/ Read them all!
ONE MASSIVE NOTE: Don't ever lie to your compiler about a type. Type declarations are a way of telling your compiler that you know what the type is. The compiler doesn't even have to use them, and when it does, it doesn't have to check that you are giving it the correct thing.
Unboxed data
Some implementations are able to unbox certain datatypes under certain conditions. Sorry that is vague, but you will need to read up on your implementation. For sbcl the 'sbcl internals' guide is very helpful.
For example:
(make-array 100 :element-type 'single-float :initial-element 0.0)
can be stored as a contiguous block of memory in sbcl.
PROFILE AGAIN (With realistic data)
I spent 3 hours writing a crazy compiler-macro-based n-dimensional matrix multiplication routine and then tested it against a 1-line built-in solution. For matrices below 5 dimensions there was not a big difference! For higher dimensions, yes, it rocked, but that 'performance benefit' was purely academic because those code paths were never touched. Luckily I undertook the task for fun, as I was asking the same question you are now.
Algorithms
All the type specifiers in the world won't give you a 100-times performance increase. That comes from better techniques. Read up on the maths behind the problem, implement different helper functions that have different strengths, and choose between them at runtime... then go back and use compiler macros to allow lisp to choose at compile time. Or specify the technique as a higher-order function; for example, make-hash-table allows you to specify the hashing function and rehash sizes, which can be crucial in getting good performance at certain sizes.
Know the limits of BigO
Algorithmic complexity means nothing if you lose all of the performance due to memory-locality issues. Conversely, sometimes we can achieve superlinear performance characteristics if, by splitting the problem among cores, the reduced dataset now fits in the L2 cache.
BigO is a great metric but it isn't the end of the story. This is the reason assoc lists are a totally valid alternative to hash-tables for low numbers of keys and certain access profiles.
Summary
There is a golden mantra I heard from somewhere in the lisp community that works so well:
Make it fast and then make it Fast
If nothing else follow this. Chant it to yourself!
Get the program up and running quickly; in doing so you are more likely to spot the places where you can use a better technique or algorithm to get your several-orders-of-magnitude improvement. Do use CL's own functions first. Don't trade away lisp's higher-order nature too early by using macros; explore how far you can go with functions.
[Edit] More notes - the following is for sbcl
Type definitions on struct slots are used for optimizing; type declarations for class slots are not.
With regard to types, start with what makes the program easy to write and understand (make it fast) and then look into access times if they are the bottleneck (make it Fast!)
(slot-value x 'name) is very fast when name is known. Look at how with-slots uses symbol-macrolet to its advantage.
So to kinda directly answer your original question:
built in first (also check libraries)
does it make the problem easier to write and understand?
use pos. By the time the performance of that indirection becomes an issue, you will have found a dozen other ways to speed up the problem, and the solution will be part of a wider optimization technique.

How do you document your Less? [closed]

I use JSDoc to document all the JS I write, but I am curious about how people document their Less. I guess I could use JSDoc, but it doesn't seem right since Less is not JS. I also want to avoid documenting my Less with JSDoc if there is a standard way that might allow different IDEs to provide tooling support.
Does anyone know of a standard way of documenting Less?
Documenting source code involves different issues, and may have different objectives, from documenting CSS, be it LESS or any other variant.
Source code involves classes and methods involving contracts, such as the types and meaning of parameters and return values. It also may have complex logic that requires explanation, or handle multiple conditions, or deal with various edge cases. It may be implementing an API which third parties will consume, which should have its own stand-alone documentation that can be "read". Systems such as JSDoc are designed with all this in mind. People reading the code can easily understand the purpose and signature and logic of the various routines, and the comments can be processed into API documents.
In a similar vein, source code is typically organized logically into a hierarchy of modules and classes. When reading the documentation, it's common to want to jump from a description of a subclass to the description of its superclass, or up to the module level. Tools like JSDoc also make this easy, by spitting out sets of interlinked HTML pages, most often.
On the other hand, consider a library such as Underscore, to which only some parts of the above apply. There are no modules, or classes, or class hierarchies. Instead, it is a bag of tools. Therefore, there is really no need for a lot of JSDoc-like machinery. Instead, what I want to do is to be able to READ the code and easily see what's happening, or get a narrative about the functions provided, probably with some code examples. That's why they use Docco, as recommended by a commenter. It's perfect for that. And as the commenter also mentioned, it can be used with almost any programming language, including CSS.
Compared to "languages" like JavaScript, CSS is (typically) flatter, and does not have the notion of "contracts" of parameters and return values, nor complex computations, although in systems like LESS of course you have mix-ins and calculations. With CSS, you also have the situation that in many cases the effect of the CSS is something visual, like say a button colored a certain way with text of a certain size. We have two potential consumers of comments in CSS: the programmer who is actually looking at the CSS code, and the UI designer or implementor who wants to know what styles are defined and check how they work.
Personally, I would adopt two approaches here, mapped to the two types of consumers. In the CSS code itself, I'd simply comment narratively, describing the purpose and structure of the rule. Parallel to that, I'd build a separate "styleguide" site, which contains visual examples of all the styles. There have been various attempts to automate the creation of such styleguides, with varying degrees of success. I have not used them, so cannot say how useful they might be. Personally, I'd go with a hand-rolled style guide.
It's also worth pointing out that the only thing worse than no documentation is wrong documentation. Whatever documentation approach you take, you have to make sure it's really sustainable and maintainable. In that sense, simpler is better.
Finally, let me note that the need for extensive documentation is inversely proportional to how well designed a set of styles and classes is. There is not much point in papering over baroque designs with poorly factored classes, weird dependencies, and poor naming, with lots of documentation. Instead, you might want to focus on refactoring your CSS so it's at least a bit more self-documenting.

In a paragraph or less, what is the purpose and benefits of pointers?

See title. That's all I have to ask. The net doesn't have many succinct answers to this question. Please keep in mind stack vs. heap. Explain as you would to a complete beginner. Just looking for the "why", not the "how".
edit
Are pointers a way to get large objects out of the stack?
When passing a huge object from one part of your program to another to be worked on (an entire class, for example, or something with a large amount of data like an image or video), passing every single bit of data would be very inefficient. Instead you can just pass a tiny little memory address (a pointer) that the receiving part of your program can then use to get to the object to be worked on.
Aside from that huge aspect, they offer a lot of flexibility but I need more than a paragraph for that.
When you get into managed code like C# or Java, EVERYTHING is done with pointers/references, but it's all behind the scenes and you don't have to deal with them like you would in C++ or another similar language. But it's still crucial to understand how they work.
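To make the size difference concrete, here is a minimal C++ sketch; the Image type and the blur functions are made up for illustration:

#include <cstdint>
#include <vector>

// Hypothetical image type; the pixel buffer may be many megabytes.
struct Image {
    std::vector<std::uint8_t> pixels;
};

// Pass-by-value: the entire pixel buffer is copied just for the call.
void blurByValue(Image img) { /* works on a private copy */ }

// Pass-by-pointer: only an address (a few bytes) crosses the call,
// and the function can read and modify the caller's original object.
void blurByPointer(Image* img) { img->pixels.assign(img->pixels.size(), 0); }

int main()
{
    Image photo;
    photo.pixels.resize(1920 * 1080 * 4); // roughly 8 MB of data

    blurByValue(photo);    // copies ~8 MB
    blurByPointer(&photo); // copies ~8 bytes (the address)
}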
Edit in response to:
"why would I pass a large object around if I don't need to work on
it?"
You wouldn't. However, correct me if I'm straying from what you're asking, but what you'll learn if you continue into Computer Science is that each piece of your program should be as simple as possible; it should only do one thing. Commonly known as the Single Responsibility Principle, this dictates that you will have many seemingly tiny parts of your program that all work together to accomplish the overarching goal. That means that a lot of those tiny pieces are going to need to work on the same objects, the same data, and use the same tools to get the job done. Let's look at a hypothetical.
You're coding a simple image editing application. You're going to need a cropping tool, a paint brush tool, a selection tool, and a re-size tool. Each of these tools is going to need its own place in your program (a class, or more likely many classes that work together), and that class will have many smaller pieces (methods/functions and other things) that work together to accomplish the goal of that class. Every single one of these classes and methods is most likely going to need to look at or modify the image data. With a pointer you can provide them all with a memory address instead of making an entire copy of the image. That way, when one of the classes or methods makes a change to it, you don't need to worry about managing all those copies and making sure they all get the same change.
It allows you to do pass-by-reference/shared data structures, which has two big features: it saves memory and CPU overhead by not making copies, and it provides for complex communication patterns by making changes to shared data.

Should I use an expression parser in my Math game?

I'm writing some children's Math Education software for a class.
I'm going to try and present problems to students of varying skill level with randomly generated math problems of different types in fun ways.
One of the frustrations of using computer-based math software is its rigidity. If you have ever taken an online math class, you'll know all about the frustration of taking an online quiz and having your correct answer thrown out because it isn't formatted in exactly the form they expect, or because of some weird spacing issue.
So, originally I thought, "I know! I'll use an expression parser on the answer box so I'll be able to evaluate anything they enter and even if it isn't in the same form I'll be able to check if it is the same answer." So I fire up my IDE and start implementing the Shunting Yard Algorithm.
This would solve the problem of answers being rejected because a fraction isn't in its smallest form, among other issues.
However, it then hit me that a tricky student would simply be able to enter most of the problems into the answer box, and my expression parser would dutifully parse and evaluate them to the correct answer!
So, should I not be using an expression parser in this instance? Do I really have to generate a single form of the answer and do a string comparison?
One possible solution is to note how many steps your expression evaluator takes to evaluate the problem's original expression, and to compare this to the optimal answer. If there's too much difference, then the problem hasn't been reduced enough and you can suggest that the student keep going.
Don't be surprised if students come up with better answers than your own definition of "optimal", though! I was a TA/grader for several classes, and the brightest students routinely had answers on their problem sets that were superior to the ones provided by the professor.
For simple problems where you're looking for an exact answer, then removing whitespace and doing a string compare is reasonable.
For more advanced problems, you might do the Shunting Yard Algorithm (or similar) but perhaps parametrize it so you could turn on/off reductions to guard against the tricky student. You'll notice that "simple" answers can still use the parser, but you would disable all reductions.
For example, on a division question, you'd disable the "/" reduction.
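A sketch of how such toggles could look, assuming the parser produces an expression tree; every name here is illustrative, not from any particular library:

#include <memory>
#include <set>
#include <string>

// Hypothetical node produced by the parsing pass (e.g. shunting yard).
struct Expr {
    std::string op;      // "+", "-", "*", "/", or "num" for a literal
    double value = 0.0;  // used when op == "num"
    std::unique_ptr<Expr> lhs, rhs;
};

std::unique_ptr<Expr> num(double v)
{
    auto e = std::make_unique<Expr>();
    e->op = "num";
    e->value = v;
    return e;
}

// Fold constant subtrees, but skip operators whose reduction has been
// switched off. With "/" disabled, "4/5 * 1/2" will NOT collapse to
// 0.4, so a student can't just type the original problem back in,
// while "2/5" survives as the tree (2 / 5).
std::unique_ptr<Expr> reduce(const Expr& e, const std::set<std::string>& off)
{
    if (e.op == "num") return num(e.value);
    auto copy = std::make_unique<Expr>();
    copy->op = e.op;
    copy->lhs = reduce(*e.lhs, off);
    copy->rhs = reduce(*e.rhs, off);
    if (off.count(e.op) || copy->lhs->op != "num" || copy->rhs->op != "num")
        return copy; // reduction disabled, or children not yet literals
    double l = copy->lhs->value, r = copy->rhs->value;
    if (e.op == "+") return num(l + r);
    if (e.op == "-") return num(l - r);
    if (e.op == "*") return num(l * r);
    return num(l / r); // "/"
}

bool sameTree(const Expr& a, const Expr& b)
{
    if (a.op != b.op) return false;
    if (a.op == "num") return a.value == b.value;
    return sameTree(*a.lhs, *b.lhs) && sameTree(*a.rhs, *b.rhs);
}

// Grade by reducing the student's tree with the question's toggles,
// then structurally comparing it against an accepted answer tree.
bool grade(const Expr& student, const Expr& accepted,
           const std::set<std::string>& off)
{
    return sameTree(*reduce(student, off), accepted);
}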
This is a great question.
If you are writing an expression system and an evaluation/transformation/equivalence engine (isn't there one available somewhere? I am almost 100% sure that there is an open source one somewhere), then it's more of an education/algebra problem: is the student's answer algebraically closer to the original expression or to the expected expression.
I'm not sure how to answer that, but here's an idea (not necessarily practical): perhaps your evaluation engine can count transformation steps to equivalence. If the answer takes fewer steps to reach the expected expression than it took to reach the original, it might be OK. If it's too close to the original, it's not.
You could use an expression parser, but apply restrictions on the complexity of the expressions permitted in the answer.
For example, if the goal is to reduce (4/5)*(1/2) and you want to allow either (2/5) or (4/10), then you could restrict the set of allowable answers to expressions whose trees take the form (x/y) and which also evaluate to the correct number. Perhaps you would also allow "0.4", i.e. expressions of the form (x) which evaluate to the correct number.
This is exactly what you would (implicitly) be doing if you graded the problem manually -- you would be looking for an answer that is correct but which also falls into an acceptable class.
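Continuing the hypothetical Expr tree from the earlier sketch, that shape restriction might look like this (exact numeric comparison is for brevity; use an epsilon in real code):

// Accept only answers whose whole tree has the form literal/literal,
// e.g. 2/5 or 4/10 for the problem (4/5)*(1/2).
bool isSimpleFraction(const Expr& e)
{
    return e.op == "/" && e.lhs && e.rhs
        && e.lhs->op == "num" && e.rhs->op == "num";
}

bool gradeFractionAnswer(const Expr& e, double expected)
{
    return isSimpleFraction(e) && e.lhs->value / e.rhs->value == expected;
}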
The usual way of doing this in mathematics assessment software is to allow the question setter to specify expressions/strings that are not allowed in a correct answer.
If you happen to be interested in existing software, there's the open-source Stack http://www.stack.bham.ac.uk/ (or various commercial options such as MapleTA). I suspect most of the problems that you'll come across have also been encountered by Stack so even if you don't want to use it, it might be educational to look at how it approaches things.

QAbstractItemModel.parent(), why?

I'm a (Py)Qt newbie, porting C# GUI code to Qt for a couple of days now. One question that I keep asking myself is why are QAbstractItemModel subclasses required to supply a parent() method, and why are they required to supply, in the resulting QModelIndex, the row of a child in the parent?
This requirement forces me to add another layer over my tree data (because I don't want to call indexOf(item) in parent(); it wouldn't be very efficient) that remembers row indexes.
I ask this because it's the first time I see a model based view require this. For example, NSOutlineViewDataSource in Cocoa doesn't require this.
Trolltech devs are smart people, so I'm sure there's a good reason for this, I just want to know what reason.
The quick answer is, "they thought it best at the time." The Qt developers are people just like you and me -- they aren't perfect and they do make mistakes. They have learned from that experience and the result is in the works in the form of Itemviews-NG.
In their own words from the link above:
Let’s just say that there is room for improvement, lots of room!
By providing a parent that contains a row and column index, they provide one possible way to implement trees and support navigation. They could just as easily have used a more obvious graph implementation.
The requirement is primarily to support trees. I couldn't tell you the reason, since I'm not a Qt dev... I only use the stuff. However, if you aren't doing trees, you could probably use one of the more-tuned model classes and not have to deal with the overhead of supplying a parent. I believe that both QAbstractListModel and QAbstractTableModel handle the parent portion themselves, leaving you free to just worry about the data you want.
For trees, I suspect that one of the reasons they need the parent is that they try to keep to only asking for the information they need to draw. Without knowing all of the items in a tree (if it wasn't expanded, for example), it becomes much harder to provide an absolute position of a given item in a tree.
As for the quandary of using indexOf(item) in the parent function, have you considered using QModelIndex's internalId or internalPointer? I'm assuming they are available in PyQt... they can be used by your model to track things about the index. You might be able to use that to shortcut the effort of finding the parent's index.
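For what it's worth, here is a sketch of that idea in C++ (the Node and TreeModel types are hypothetical; PyQt exposes the same createIndex/internalPointer pair):

#include <QAbstractItemModel>
#include <QVector>

// Hypothetical tree node that caches its parent pointer and its row
// within that parent, so parent() never needs an O(n) indexOf(item).
struct Node {
    Node* parent = nullptr;
    int row = 0;
    QVector<Node*> children;
};

class TreeModel : public QAbstractItemModel {
public:
    QModelIndex index(int row, int column,
                      const QModelIndex& parent = QModelIndex()) const override
    {
        Node* p = parent.isValid()
            ? static_cast<Node*>(parent.internalPointer())
            : m_root;
        // Stash the child node's address inside the index.
        return createIndex(row, column, p->children.at(row));
    }

    QModelIndex parent(const QModelIndex& child) const override
    {
        Node* node = static_cast<Node*>(child.internalPointer());
        if (!node || !node->parent || node->parent == m_root)
            return QModelIndex();
        // The cached row makes the required (row, parent) pair O(1).
        return createIndex(node->parent->row, 0, node->parent);
    }

    // rowCount(), columnCount() and data() omitted for brevity.
private:
    Node* m_root = nullptr;
};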

Resources