Low framerate on Android using ImmediateLayer (and weird PlayN log message) - playn

I am trying to get my game running smoothly on Android using PlayN. Everything works as expected, but I have a very low framerate.
I think I might be doing it the wrong way, so it would be great if someone could give me some help :D.
I'm currently creating an ImmediateLayer, clearing it, and drawing every object on every frame; images are loaded once using graphics.getImage(...). I get around 10 fps on an HTC Desire (connected in debug mode) while drawing around 100 images, and it gets slower as I add objects.
What makes me think I am doing something really wrong is that PlayN is spamming the log (I use DDMS to check it) with these messages:
X Textures remaining
Y Textures created
They appear around 20 times per second.
So basically I am doing something like this:
public void draw(Surface surface) {
    for (Drawable d : toDraw) {
        surface.drawImage(d.getImage(), d.x, d.y, d.w, d.h);
    }
}
where draw() is called from the renderer of an ImmediateLayer.
Thank you for your time!
Lucas
Edit:
I found a possible answer reading this thread on the PlayN mailing list:
https://groups.google.com/forum/?fromgroups#!topic/playn/XJTlBgmfzaQ
especially this message:
Most of the tiles share the same image, but different parts of it (obtained
via subImage, which to my knowledge doesn't copy but only creates a
reference to a sub-part of the image?). Would different sub-parts cause a
texture change?
Sub-images will share the same texture, but you have to be sure to
render all the images that share the same texture together. If you
have a bunch of sub-images from image A and a bunch from image B and
you render A, B, A, B, A, B, A, B, you get the same performance as
if you rendered eight different images. You need to render A, A, A, A,
B, B, B, B.
It seems I am in a worst-case scenario.
I have many different textures (say 50); they are each part of a different image and all rendered in random order, so I may end up with something close to N texture switches, where N is the number of objects I render.
I'll update this when I get something to work :).
Edit 2:
I am now rendering 100 objects pointing to the same Image (so no texture swap?). I still get about 10 fps on Android, and the debug log still says on each frame:
1 textures remain
2 textures created
I feel like I'm missing something important... help :).

After some deeper testing I found that:
1) The HTC Desire cannot be expected to reach 60 fps drawing hundreds of objects, even when they all share the same image.
2) An ImageLayer attached to a GroupLayer seems to provide better performance on Android.
3) Calling surface.clear() at the beginning of the renderer callback (using an ImmediateLayer) greatly improved the performance of the draw and removed the "textures created" debug log.
4) Finally, sorting the objects by texture and reducing the number of texture swaps (drawing A A A A B B B instead of A B A B A B) gave me better results; see the sketch below.
The app is now running at 30 fps on the Desire and 60 fps on a Galaxy S2, drawing around 500 objects on screen.
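To illustrate finding 4, here is a minimal sketch of the sort-by-texture idea. PlayN itself is Java, but the ordering trick is language-agnostic, so it is shown here in C++; the Drawable struct, the textureId field, and drawOne() are hypothetical stand-ins, not PlayN API:

#include <algorithm>
#include <vector>

// Hypothetical stand-in for anything that knows which shared image
// (texture) it is drawn from; not part of the PlayN API.
struct Drawable {
    int textureId;      // id of the shared image/atlas this sprite uses
    float x, y, w, h;
};

void drawOne(const Drawable& d) { /* bind texture, then draw the quad */ }

void drawAll(std::vector<Drawable>& toDraw) {
    // Group drawables that share a texture so the renderer sees
    // A A A A B B B instead of A B A B A B (one switch per run).
    std::sort(toDraw.begin(), toDraw.end(),
              [](const Drawable& a, const Drawable& b) {
                  return a.textureId < b.textureId;
              });
    for (const Drawable& d : toDraw)
        drawOne(d);
}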

Related

How to avoid strange structure artifacts in scaled images?

I create a big image stitched together from many individual microscope images.
Suddenly (after several months of working properly), the stitched overview images became blurry and contain strange structural artifacts like askew lines (not the rectangles; those are due to imperfect stitching).
If I open any particular tile at full size, it is not blurry and the artifacts are hardly observable. (Note that the image below is already scaled 4x.)
The overview image is created manually by scaling each tile using QImage::scaled and copying all of them to the corresponding region of the big image. I'm not using OpenCV's stitching.
I assume this happens because of the image contents, since most of the overview images are fine.
The question is: how can I prevent such hardly observable artifacts from becoming clearly visible after scaling? Is there some means of doing so in OpenCV or QImage?
Is there an algorithm to find out whether image content could lead to such an effect for a given scale factor?
Many thanks in advance!
Are you sure the camera is calibrated properly? That the lighting is uniform? Is the lens clean? Do you have electrical components that interfere with the camera connection?
If you add up image frames of a uniform material (or of a non-uniform material moved randomly for a significant time), the resulting integrated image should be completely uniform.
If your produced image is not uniform, and especially if you get systematic noise (like the apparent sinusoidal noise in the provided pictures), write a calibration function that transforms image -> calibrated image.
Filtering in Fourier space is another way to remove the noise, but since the image is rotated you will lose precision, and you will also cut off components of the real signal. The following empirical method will significantly reduce the noise in your particular case:
1. ground_output: a composite image with the per-pixel sum of >10 frames (more is better) over a uniform material (e.g. an excited slab of phosphorus).
2. ground_input: the average (or sqrt of the sum of px^2) of ground_output.
3. calib_image: ground_input / (per px) ground_output. Saved for the session, or persisted to a file (important: make sure no lossy compression such as JPEG is used!).
4. work_input: the images to work on.
5. work_output = work_input * (per px) calib_image: images calibrated for the systematic noise.
If you can't create a perfectly uniform target (such as having a uniform material on hand), do not worry too much. If you move any material uniformly (or randomly) for enough time, it will act as a uniform material for this purpose (think of a blurred photo).
This method has the added advantage of calibrating out the solitary faulty pixels that CCD cameras have (e.g. NormalPixel.value(signal)).
If you want to have more fun, you can always fit the calibration function to something more complex than a zero-intercept line (steps 3 and 5).
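Since the question mentions OpenCV, here is a rough sketch of steps 1-5 using cv::Mat (untested; it assumes single-channel grayscale frames, and the function names are mine):

#include <opencv2/opencv.hpp>
#include <vector>

// Steps 1-3: build the per-pixel calibration image from frames of a
// uniform target (assumes single-channel input).
cv::Mat makeCalibImage(const std::vector<cv::Mat>& uniformFrames) {
    cv::Mat groundOutput = cv::Mat::zeros(uniformFrames[0].size(), CV_32F);
    for (const cv::Mat& f : uniformFrames) {     // ground_output: per-pixel sum
        cv::Mat f32;
        f.convertTo(f32, CV_32F);
        groundOutput += f32;
    }
    double groundInput = cv::mean(groundOutput)[0];  // ground_input: the average
    return groundInput / groundOutput;               // calib_image, element-wise
}

// Step 5: work_output = work_input * (per px) calib_image.
cv::Mat calibrate(const cv::Mat& workInput, const cv::Mat& calibImage) {
    cv::Mat in32, out;
    workInput.convertTo(in32, CV_32F);
    cv::multiply(in32, calibImage, out);
    return out;
}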
I suggest scaling the image with some other software to verify if the artifacts are in fact caused by Qt or are inherent in the image you've captured.
The askew lines look a lot like analog TV interference, or CCTV noise induced by 50 or 60 Hz power lines running alongside the signal cable, or some other electrical interference on the signal.
If the image distortion is caused by signal interference, you can try to mitigate it by moving the signal lines away from whatever could be the source of the problem, or by fitting something to filter out the noise (baluns, for example).

How many lines can Qt draw on-screen?

I am currently working on a Qt application to draw maps. I am trying to draw 400,000+ lines, and it crashes after using ~2 GB even though I still have memory left on my machine. I am wondering if I am hitting some limit inside Qt that is causing the problem. Does anyone know whether there is a limit to the number of things you can draw, or whether that limit can be changed?
If it is helpful, I am coding in C++ with a class that has a member function to draw the lines. The code is roughly as follows
QPointF fromPoint;
QPointF toPoint;
fromPoint = foo( x );
toPoint = foo( y );
m_Painter.drawLine( fromPoint, toPoint );  // m_Painter is a QPainter
Edit: It turns out the problem was somewhere else in the code; it had to do with some custom caching that was being done. I am still interested, though: is there a limit to how many lines Qt can draw? Does anyone know?
QPainter executes its underlying graphics through QPaintEngine, which has several implementations (like qpaintengine_mac.cpp, qpaintengine_x11.cpp, or qpaintengine_preview.cpp).
Some devices are raster-based and are likely drawing each line into an image buffer, throwing away the endpoints once the drawing is done. There should be no limit to the number of lines you can draw in that case.
If the target device is OpenGL, or a printer doing some kind of PostScript-like output, then the limitations of that particular paint engine may well be a factor. You'd have to look at the specific one.
For example: if you trace down the X11 implementation of drawLine, you'll see it passes through drawPolygon() down to strokePolygon_dev() and bottoms out at a call to XDrawLines:
XDrawLines(dpy, hd, gc, pts, numberPoints, CoordModeOrigin);
So there you have another abstraction layer, and the question becomes whether that X Windows display is guaranteed to be raster. (My guess would be that it is.)
Anyway, the answer is: unlimited if raster; it may depend otherwise, but the limitations (if any) are probably coming from the underlying device of the paint engine, not from Qt.
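As an aside, if per-call overhead ever matters at 400,000+ lines, QPainter can take them all at once via drawLines(); a minimal sketch (the drawAllLines() helper and the two point vectors are hypothetical, the batching itself is standard QPainter API):

#include <QPainter>
#include <QVector>
#include <QLineF>

// Sketch: collect all segments first, then hand them to QPainter in
// one drawLines() call instead of 400,000 drawLine() calls.
void drawAllLines(QPainter& painter, const QVector<QPointF>& from,
                  const QVector<QPointF>& to) {
    QVector<QLineF> lines;
    lines.reserve(from.size());
    for (int i = 0; i < from.size(); ++i)
        lines.append(QLineF(from[i], to[i]));
    painter.drawLines(lines);  // one call for the whole batch
}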

Constrained (Delaunay) Triangulation

For a university project I need to implement a computer graphics paper that was released a couple of years ago. At one point I need to triangulate the results I get from my simulation. I guess it's easier to explain what I need by looking at a picture contained in the paper:
Let's say I already have all the information it takes to reconstruct the contour lines that you can see in the second thumbnail. Using those, I need to do a triangulation with those silhouettes as constraints. I have searched the internet for triangulation libraries like CGAL, VTK, Triangle, Triangle++, ... but I always ended up throwing my hands up in horror. I am not a good programmer, and it seems impossible to me to get into one of those APIs before the deadline of this project passes.
I would appreciate any kind of help: code snippets, tips, etc.
I know that the algorithms need segments (pairs of points) as input, so let's say I have one std::vector containing all the pairs of points defining the silhouette, as well as the left and right sides of the rectangle.
Can you give me a code snippet, e.g. for CGAL, that I could use for my purpose? First of all I just want to achieve the state of the third thumbnail. Later on I will have to do some displacement within the "cracks" and finally write the information into a VBO for OpenGL rendering.
I have started working it out with CGAL. One simple problem still drives me crazy:
It is possible to attach information (like ints) to points before adding them to the triangulator object. I do this because I need, on the one hand, an int flag that I later use to define my texture coordinates and, on the other hand, an index that I use to create an indexed VBO.
http://doc.cgal.org/latest/Triangulation_2/Triangulation_2_2info_insert_with_pair_iterator_2_8cpp-example.html
But instead of bare points I only want to insert constraint edges. If I insert both, CGAL returns strange results, since the points have then been fed in twice (once as a point and once as an endpoint of a constrained edge).
http://doc.cgal.org/latest/Triangulation_2/Triangulation_2_2constrained_8cpp-example.html
Is it possible to attach information to "constraints" in the same way as with points, so that I can use only cdt.insert_constraint( Point(j,0), Point(j,6) ); before iterating over the resulting faces?
Later, when I loop over the triangles, I need some way to access the int flags I defined before. Like this, but on the "ends" defined by the constraint edges rather than on actual points:
for (CDT::Finite_faces_iterator fit = m_cdt.finite_faces_begin();
     fit != m_cdt.finite_faces_end(); ++fit, ++k) {
    int j = k * 3;
    for (int i = 0; i < 3; i++) {
        indices[j + i] = fit->vertex(i)->info().first;
    }
}
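One way that seems to work is to avoid feeding the endpoints in twice: insert each endpoint once with its info, keep the returned Vertex_handle, and then constrain between handles using the insert_constraint(Vertex_handle, Vertex_handle) overload. A minimal, untested sketch (the (flag, index) pair mirrors the info().first usage above; the concrete coordinates are placeholders):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>
#include <CGAL/Triangulation_vertex_base_with_info_2.h>
#include <utility>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
// Attach a (flag, index) pair to every vertex, as in the loop above.
typedef CGAL::Triangulation_vertex_base_with_info_2<std::pair<int,int>, K> Vb;
typedef CGAL::Constrained_triangulation_face_base_2<K> Fb;
typedef CGAL::Triangulation_data_structure_2<Vb, Fb> Tds;
typedef CGAL::Constrained_Delaunay_triangulation_2<K, Tds> CDT;

int main() {
    CDT cdt;
    // Insert each endpoint exactly once, attach its info, keep the handle.
    CDT::Vertex_handle va = cdt.insert(CDT::Point(0, 0));
    va->info() = std::make_pair(/*flag*/ 0, /*index*/ 0);
    CDT::Vertex_handle vb = cdt.insert(CDT::Point(0, 6));
    vb->info() = std::make_pair(/*flag*/ 0, /*index*/ 1);
    // Constrain between the existing vertices, so no point is fed in twice.
    cdt.insert_constraint(va, vb);
    // The faces now expose the per-vertex info as in the question's loop.
    std::vector<int> indices;
    for (CDT::Finite_faces_iterator fit = cdt.finite_faces_begin();
         fit != cdt.finite_faces_end(); ++fit)
        for (int i = 0; i < 3; ++i)
            indices.push_back(fit->vertex(i)->info().second);
    return 0;
}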

Moving a spinning 3D object across the screen, making it face the correct way when it stops

The best example of what I am trying to achieve is in this YouTube video:
http://www.youtube.com/watch?v=53Tk-oGL2Uo
The letters that make up the word 'Atari' fly in from the edges of the screen, spinning, and then line up to form the word at the end.
I know how to make an object move across the screen, but how do I calculate the spinning so that when the object gets to its end position it is facing the correct direction?
The trick is to actually have the object(s) in the right position at a specific time (say t = 5.0 seconds) and then calculate backwards for the previous frames.
I.e. before 5.0 seconds, you rotate the object(s) by [angular velocity] * (5.0 - t) and translate them by [velocity] * (5.0 - t).
If you do this, it will look like the objects fly together and line up perfectly. But what you've actually done is blown them apart in random directions and played the animation backwards in time :-)
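A minimal sketch of that "run it backwards" idea in C++ (the Pose struct, field names, and single-axis angle are mine; a real version would use a quaternion per object):

struct Pose {
    float x, y, z;   // position
    float angle;     // spin about one axis, in radians (for brevity)
};

// The final resting pose is fixed at tEnd; earlier frames are derived
// by undoing the (randomly chosen) velocities for the remaining time.
Pose poseAt(float t, float tEnd, const Pose& finalPose,
            float vx, float vy, float vz, float angularVelocity) {
    if (t >= tEnd) return finalPose;  // arrived: sit still, facing correctly
    float dt = tEnd - t;              // time left until arrival
    Pose p = finalPose;
    p.x -= vx * dt;                   // translate backwards along the path
    p.y -= vy * dt;
    p.z -= vz * dt;
    p.angle -= angularVelocity * dt;  // unwind the spin
    return p;
}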
The CORRECT way of doing this is using keyframes. You can create the keyframes in any 3D editor (I use MAX, but you could use Blender). You don't necessarily need to use the actual characters; even a cuboid would suffice. You will then need to export those animation frames (again, in MAX I would use ASE; COLLADA would work with Blender) and either load them up at runtime or transform them into code.
Then it's a simple matter of running that animation based on the current time.
Here's a sample from my own library that illustrates this technique. Doing this once will last you far longer and give you more benefits in the long run than figuring out how to do it procedurally.
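For the playback step, a small sketch of sampling a keyframe track at the current time (the Keyframe struct and the linear blend are illustrative only; data exported from MAX or Blender would normally drive quaternion interpolation instead of a single angle):

#include <cstddef>
#include <vector>

struct Keyframe {
    float time;
    float x, y, z;   // position at this time
    float angle;     // one-axis rotation, for brevity
};

// Sample the track at time t by blending the two surrounding keyframes.
Keyframe sample(const std::vector<Keyframe>& track, float t) {
    if (t <= track.front().time) return track.front();
    if (t >= track.back().time)  return track.back();
    std::size_t i = 1;
    while (track[i].time < t) ++i;          // first key at or after t
    const Keyframe& a = track[i - 1];
    const Keyframe& b = track[i];
    float u = (t - a.time) / (b.time - a.time);
    Keyframe out;
    out.time  = t;
    out.x     = a.x + (b.x - a.x) * u;
    out.y     = a.y + (b.y - a.y) * u;
    out.z     = a.z + (b.z - a.z) * u;
    out.angle = a.angle + (b.angle - a.angle) * u;
    return out;
}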

How to find out the moment after rotationX has finished

I am playing around with the rotationX/Y/Z properties available in Flash Player since version 10. For testing purposes I created a cube, put Canvas objects on three of its sides (top, front, bottom), and created a tween to get the values required for turning by 90 degrees.
Turning the cube (a Canvas) using rotationX = xx works well when the three side Canvas objects are small and filled with a not-too-complex element hierarchy. With larger and more complex content it slows down. The next idea was to remove the Canvas elements' content and replace it with a snapshot image of the content before starting the turn; after the turn is performed, the original content is put back on the sides again. This results in a good performance increase.
Using a tween, the last step of the rotation is done in the function that is called as the tweenEnd handler. In this function the process of copying the canvases' content back is also performed. Unfortunately this results in a short hang of the cube right at that last rotation step, the reason being that the rotation and the copying back take place at the same time.
So I could wait for some time after calling cube.rotationX = endValue by using a Timer or setTimeout(func, 500), but this is ugly.
So my question is: after calling cube.rotationX = endValue, a period of time is required to calculate the data for the rotation and to perform the rotation itself. Is there a way to find out the point in time when the rotation has ended, so that the copying can be started then?
thank you in advance
tyler
There's no default event dispatched when the rotation is completed. But I would think about using the callLater() function to copy the content back. Try it.
That is exactly the point: there is no event indicating the end of the rotation. The solution using callLater() instead of setTimeout() does seem like an improvement, however, since waiting for a fixed amount of time always involves some "hope it works on machine X". Thank you very much for the hint!
greetings
tyler
