What is the difference between Qt painting and a video player?

A video player can interpret a file (MP4, AVI, ...) into pictures on screen.
Qt can draw lines, rectangles, pixmaps, and so on to the screen.
What is the difference between them?

You're comparing apples to oranges. They are completely different.
A video player reads a video stream from a file and decodes it using a decoder (DivX, XviD, H.264, etc.), displaying the output on your screen.
Qt4's object painting allows you to paint onto a QPaintDevice (typically a QWidget or QImage). That's basically it.
Video decoders are heavily optimized, and some even use GPU acceleration. Qt4's object painting isn't made for rapidly-changing frames and is used to draw basic things.

Related

What is the best approach to displaying drawings on different-sized Paper JS views?

Context
I'm using Paper JS to build a multi-player drawing game. At any given point, a single user will be drawing to his/her canvas, and the data will get sent to the server to be broadcast to other users. Each user's canvas may be of variable size, and it resizes as the window resizes while maintaining the same aspect ratio.
The goal is for each user to have a scaled representation of the drawing (i.e. everything fits inside the different sized canvases and the content doesn't get distorted). This should be the case if a drawing transfers from a larger canvas to a smaller canvas, and vice-versa. The project supports a drawing tool as well as an eraser tool.
Problem
Approach 1 below scales the drawings the way I want, but there is substantial lag. Approach 2 deals with the lag, but doesn't scale the drawings the way I want.
My understanding is that SVGs will scale nicely whether they are scaled-up or scaled-down. But rasters are pixel-based and will become "blurry" when scaled-up. When I test approach 2, a drawing from a smaller canvas gets blurred on a larger canvas. The result is the same whether I use export/importJSON or export/importSVG. Is there a way to get both good performance and scaled-drawings? See below for example implementations of the tools.
Approach 1: Paths + Symbols:
- Every path/symbol placement is kept in the active layer.
- The eraser tool draws a white rectangle (defined as a symbol) to mimic an "erasing" effect.
- This works fine as a demo, but will start to lag very quickly as the number of items in the active layer increases. The eraser tool in particular will not function smoothly.

Relevant sketch
Approach 2: Rasterization:
- After a path is drawn or a symbol is placed, the active layer is rasterized and its children are removed.
- This seems to work quite well on a single canvas, and the eraser doesn't lag like in the first approach. There are only 2 items in the active layer after each rasterization.
- When a drawing from a client with a smaller canvas is exported (using exportJSON or exportSVG) to a client with a larger canvas, the result is "blurry".
- The above also happens when a drawing is made and then the canvas is resized to be larger.

Relevant sketch
You could send your objects as SVG and rasterize them once received. That way each client rasterizes at its own canvas resolution, so nothing is scaled up after rasterization and nothing gets blurry.

How is it possible to get higher resolution video than 4K on recent model iPhones? (Noob here)

First, I apologize if this question is in the wrong place or formatted incorrectly. I am young and this is my first post here. I was planning to create a camera app once I get my MacBook, but before that I was looking at other camera apps and noticed one that shoots 4000x3000 24 fps H.265 video on my iPhone 7 Plus. How is this possible? Does the API easily let you choose resolutions above 4K, or do you have to use a trick?
No, it's not really possible. An app that's allegedly doing it is interpolating pixels between the actual pixels that the camera is delivering. That means it's basically making up extra pixels, typically by averaging the colors of nearby pixels.
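As a rough illustration of what that interpolation does, here is a toy sketch (not any app's actual code) that upscales a grayscale image by bilinear interpolation, i.e. by averaging the nearest real pixels to synthesize the new ones:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Upscale a grayscale image with bilinear interpolation: every output
// pixel is a weighted average of the four nearest input pixels, so the
// "extra" pixels are made up, not captured by the sensor.
std::vector<uint8_t> upscaleBilinear(const std::vector<uint8_t>& src,
                                     int sw, int sh, int dw, int dh) {
    std::vector<uint8_t> dst(static_cast<size_t>(dw) * dh);
    for (int y = 0; y < dh; ++y) {
        float fy = (dh > 1) ? float(y) * (sh - 1) / (dh - 1) : 0.0f;
        int y0 = int(fy), y1 = (y0 + 1 < sh) ? y0 + 1 : y0;
        float wy = fy - y0;
        for (int x = 0; x < dw; ++x) {
            float fx = (dw > 1) ? float(x) * (sw - 1) / (dw - 1) : 0.0f;
            int x0 = int(fx), x1 = (x0 + 1 < sw) ? x0 + 1 : x0;
            float wx = fx - x0;
            // Blend horizontally on the two source rows, then vertically.
            float top = src[y0 * sw + x0] * (1 - wx) + src[y0 * sw + x1] * wx;
            float bot = src[y1 * sw + x0] * (1 - wx) + src[y1 * sw + x1] * wx;
            dst[y * dw + x] = uint8_t(top * (1 - wy) + bot * wy + 0.5f);
        }
    }
    return dst;
}
```

Upscaling the two-pixel row {0, 200} to three pixels yields {0, 100, 200}: the middle value is invented by averaging its neighbors, which is why upscaled footage looks soft rather than adding real detail.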

Improving performance on bitmap images decoding on translating into view

I'm developing an application in which slides translating into view are the primary navigation mechanism. The first slide to come in involves several superimposed PNGs, roughly 2000px square, with transparency, and there's a noticeable framerate stutter as the images come into view.
Using Chrome Dev Tools' Timeline feature I've established that while most of the individual Paint tasks take under 5 milliseconds each, the significant outliers are those Paint events whose subtasks include decoding the PNGs, which take between 50 and 100 milliseconds one after the other, seemingly at the moment the images come into view.
Ideally I would like to decode the bitmaps ahead of time, but I can't think of a way of forcing this behaviour without actually rendering them in view. Any ideas?
If the bottleneck is decoding, then pre-render your images to a canvas, and then either draw those pre-rendered canvases to your view canvas or translate them in using CSS.

Why does wxWidgets update drawing slower than Qt?

I am using wxWidgets to draw a large flow chart, 625 x 26329 pixels. The program was ported from Qt to wxWidgets. The layout is simple: a main frame containing a customized scroll window, and the scroll window repaints the part of the chart inside the updated client region each time.
Here Qt and wxWidgets behave very differently. When scrolling vertically with the mouse wheel, Qt repaints the chart very smoothly, while wxWidgets repaints slowly, with flicker.
Can anyone tell me how to make the painting efficient?
Are you sure it's slow? I'd be surprised; my experience has been different.
You mention flickering. Flicker is mostly the result of too many refresh calls.
To prevent it you must use double buffering; that is the key.
Double buffering means drawing everything offscreen into an image/bitmap first; only when everything is drawn is the image/bitmap copied to the screen in one step (which is very fast, so there is no flicker!).
Qt uses double buffering by default. That's why it always looks smooth.
The downside of this approach is that it costs some performance.
wxWidgets doesn't impose it on you; instead, setting up double buffering is your job.
You should also check whether you are clipping the region you're drawing. Clipping under Windows with wxWidgets gave me much better performance.
PS:
Yes, it's an old question, but I think it's still worth setting the facts straight.
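The offscreen-buffer idea is toolkit-independent (in wxWidgets it is packaged as wxBufferedPaintDC / wxAutoBufferedPaintDC). Stripped of any GUI code, a sketch of the pattern looks like this; the `Screen` type and `blit` function are hypothetical stand-ins for the real window surface:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for a window's visible surface.
struct Screen {
    int width, height;
    std::vector<uint32_t> pixels;
    Screen(int w, int h) : width(w), height(h), pixels(size_t(w) * h, 0) {}
};

// Double buffering, step 1: do ALL drawing into an offscreen buffer first.
std::vector<uint32_t> renderOffscreen(int w, int h) {
    std::vector<uint32_t> buffer(size_t(w) * h, 0xFFFFFFFF); // clear to white
    for (int x = 0; x < w; ++x)                // draw a horizontal red line
        buffer[size_t(h / 2) * w + x] = 0xFFFF0000;
    return buffer;
}

// Step 2: copy the finished frame to the screen in one operation. The user
// never sees the intermediate clear-then-draw states, so there is no flicker.
void blit(Screen& screen, const std::vector<uint32_t>& buffer) {
    screen.pixels = buffer;
}
```

Flicker comes from the screen showing the cleared background before the drawing lands on top of it; because step 2 replaces the whole visible frame at once, that intermediate state never reaches the display.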

QGraphicsView for 2D RPG

I would love to ask about the possibility of creating a 2D RPG game with Qt's QGraphicsView.
A game similar to battle heart - http://www.youtube.com/watch?v=0VqlJ_AvFS8
Why am I thinking of using Qt?
Qt is cross-platform, and support for mobile platforms like iOS and Android is growing fast.
I want to save the images on disk as SVG.
I want to render the images on the fly (for example, while the game is loading) into pixmaps for better performance, after scaling them to the appropriate screen size (so we get both better performance and support for any screen size).
What do you guys think about Qt? Do you have any other good options?
Qt makes converting SVG to PNG about as easy as it can be, so that's the killer feature keeping me with Qt.
Best
I've done this, and I can confirm that Qt is a perfectly good option, as long as you're not particularly concerned with download size (you're probably going to end up with a minimum of about 30 megs). You might consider looking into QML for handling your UI animations, as it's particularly well suited for that sort of thing.
I would strongly recommend using the OpenGL 2 backend, as it's fast and allows for GLSL shaders, which are good for special effects. It's also possible to use a QGLWidget as the background so you can do direct OpenGL drawing if needed.
Edit: Source is available at https://github.com/lendrick/Orange-Engine/wiki
