Would it be unreasonably expensive to animate the wallpaper in AwesomeWM?

I was reading the source code for setting the wallpaper in awful.wallpaper and I found a few comments that suggest that animating the wallpaper would be very expensive.
For example:
    -- Set the wallpaper.
    local pattern = cairo.Pattern.create_for_surface(target)
    capi.root.wallpaper(pattern)

    -- Limit some potential GC induced increase in memory usage.
    -- But really, if someone is trying to apply wallpaper changes more
    -- often than the GC is executed, they are doing it wrong.
    target:finish()
end

local mutex = false

-- Uploading the surface to X11 is *very* resource intensive. Given the updates
-- will often happen in batch (like startup), make sure to only do one "real"
-- update.
local function update()
    if mutex then return end
    mutex = true

    gtimer.delayed_call(function()
        -- Remove the mutex first in case `paint()` raises an exception.
        mutex = false
        paint()
    end)
end
My question is: given the current state, would it be too expensive to animate the wallpaper at 60+ fps? And if so, is there a way around this?

If you want an animated wallpaper, set a video player as your root window. If you have multiple screens and keep creating/uploading wallpaper surfaces 60 times per second, you get:
(1 byte per channel) * (3 channels) * (width) * (height) * (60 fps); for a 4K display, that's about 1.4 GB/s of data. Add the cost of painting, uploading and synchronizing, plus awful.wallpaper's own overhead, and you simply don't have enough resources to do it.
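To put numbers on that arithmetic, here is a tiny sketch (the 3840x2160 resolution and 3-byte pixel format are assumptions; X11 visuals are often 4 bytes per pixel, which only makes it worse):
#include <stdio.h>

int main(void)
{
    /* Assumed: 4K resolution, 3 bytes per pixel, 60 updates per second. */
    const double width = 3840, height = 2160, bytes_per_pixel = 3, fps = 60;

    double bytes_per_second = width * height * bytes_per_pixel * fps;
    printf("%.2f GiB/s of raw pixel data\n",
           bytes_per_second / (1024.0 * 1024.0 * 1024.0));   /* ~1.4 GiB/s */
    return 0;
}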
The reason video players and 3D games work at all is that a lot of the magic happens on the GPU and the textures/frames are compressed to a fraction of their raw size.
That hasn't prevented some people from doing it anyway, but it is a huge waste of resources. The way awesome does the wallpaper by default is the least efficient way possible. It is also the most backward-compatible way, which allows some terminals with "fake transparency" to work.

Related

OpenCL: know local work group size in advance?

I'm working on optimizing a separable image downscaler. My next step is to reduce multiple (nearest) samplings of the same texel by reading all necessary texels into local memory. Here begins the fun...
The downscaler is versatile, so it can downscale anything larger into anything smaller, and it can even take sections of an image and downscale them into a destination image. Thus the final resolution divider is never a whole number. Most of the time it will be something around 3.97 or so. This means: I do not know the required size for that local array at compile time.
To me that means: before enqueuing a task, I'll have to create a local mem object of the required size.
How do I know what workgroup sizes OpenCL will select?
If there is no way, is there a "best practice" to overcome this problem?
P.S.: I'm writing for OpenCL 1.1 compatibility.
Since you are using images, the texture cache can be relied upon instead of using shared local memory.
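A rough, hedged sketch of what that suggestion looks like in practice: sample the source image through a sampler instead of staging texels in local memory, and let the texture cache absorb the repeated nearby reads (the kernel and argument names below are illustrative, not taken from the question's code):
// OpenCL C, 1.1-compatible: nearest-neighbour downscale reading straight from
// an image object; repeated reads of nearby texels are served by the texture cache.
__constant sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                           CLK_ADDRESS_CLAMP_TO_EDGE |
                           CLK_FILTER_NEAREST;

__kernel void downscale(__read_only image2d_t src,
                        __write_only image2d_t dst,
                        float scale_x, float scale_y)
{
    int2 out = (int2)((int)get_global_id(0), (int)get_global_id(1));
    if (out.x >= get_image_width(dst) || out.y >= get_image_height(dst))
        return;

    // Map the destination texel back to a (generally non-integer) source position.
    int2 in = (int2)(convert_int(out.x * scale_x),
                     convert_int(out.y * scale_y));
    write_imagef(dst, out, read_imagef(src, smp, in));
}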

Is it a bad idea to keep a fixed global_work_size and local_work_size when the number of elements to be processed grows randomly?

Often it is advised to keep the global_work_size the same as the logical amount of "elements" you must process. My application doesn't have such a thing, though. If I have N elements that need to be processed, then, after a single kernel pass, I will have M elements - a completely different number that doesn't depend on N.
In order to deal with this situation, I could write a loop such as:
while (elementsToBeProcessed)
    read "elementsToBeProcessed" variable from device
    enqueue ND range kernel with global_work_size = elementsToBeProcessed
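In real host code that loop might look roughly like the sketch below; the queue, kernel and count-buffer handles are illustrative and error checking is omitted:
#include <CL/cl.h>

/* Illustrative only: `count_buf` holds a single cl_uint with the number of
 * elements the next pass must process; error checking is omitted. */
static void run_passes(cl_command_queue queue, cl_kernel kernel,
                       cl_mem count_buf, cl_uint elements)
{
    while (elements > 0) {
        size_t global = elements;   /* fitted global size for this pass */
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL,
                               0, NULL, NULL);

        /* Blocking read: the host stalls until the pass has finished and the
         * updated element count has been copied back. */
        clEnqueueReadBuffer(queue, count_buf, CL_TRUE, 0, sizeof(cl_uint),
                            &elements, 0, NULL, NULL);
    }
}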
But that requires one read per pass. An alternative would be to keep everything inside the GPU, by calling enqueueNDRangeKernel only once, with a fixed global_work_size and local_work_size matching the GPU layout and then use a master thread to synchronize the computation within.
My question is simple: is my intuition correct that the second option is better, or is there any reason to go with the first?
That is a tricky problem, and which way to take depends on the global size values you are going to have and how much they change over time.
A read per pass (better for highly changing values):
- Fitted global size, all the work items will do useful work
- Unfitted local size for the HW, if the work size is small
- Blocking behavior in the queue, bad device utilization
- Easy to understand and debug
Fixed kernel launch size (better for stable but changing values):
- Un-fitted global size, may waste some time running null work items
- Fitted local size to the device
- Non-blocking behavior, 100% device usage
- Complex to debug
As some answers already say, OpenCL 2.0 is the solution, by using pipes. But it is also possible to use another OpenCL 2.0 feature: enqueuing kernels from inside kernels, so that your kernels can launch the next batch of work without CPU intervention.
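As a hedged sketch of that device-side enqueue idea (OpenCL C 2.0 only; next_stage and the argument layout are made up for illustration), a one-work-item launcher kernel can size and enqueue the next pass itself:
// `next_stage` is a hypothetical helper doing one element's worth of work.
void next_stage(global int *data, uint i);

kernel void launch_next(global int *data, global uint *count)
{
    uint n = *count;              // element count produced by the previous pass
    if (n == 0)
        return;                   // nothing left: stop the chain

    // Enqueue the follow-up pass with an NDRange fitted to `n`, entirely on
    // the device, so the host never has to read the count back in between.
    enqueue_kernel(get_default_queue(),
                   CLK_ENQUEUE_FLAGS_NO_WAIT,
                   ndrange_1D(n),
                   ^{ next_stage(data, get_global_id(0)); });
}
Such a launcher can be enqueued (or can re-enqueue itself) after every pass, keeping the whole loop on the device.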
It is always good if you can avoid transferring data between host and device, even if it means a little more work on the device. In many applications the data transfer is the slowest part.
To find out the better solution for your system configuration, you need to test both of them. If you are targeting multiple platforms, then the second one should be faster in general. But there are a lot of things that can make it slower. For example, the code for it might be harder for the compilers to optimize, or the data access pattern might lead to more cache misses.
If you are targeting OpenCL 2.0, pipes might be something you want to look at for this kind of random amount of elements. (Before I get some downvotes because of the platforms not supporting 2.0: AMD has promised 2.0 drivers to come this year.) With pipes, you can make a producer kernel and a consumer kernel. The consumer kernel can start working as soon as it has enough items to work on. This might lead to better utilization of all resources.
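A minimal sketch of that producer/consumer split (OpenCL C 2.0; the kernel and argument names are made up for illustration):
// Producer: each work-item that generates an element pushes it into the pipe.
kernel void producer(write_only pipe int out_pipe, global const int *input, uint n)
{
    uint i = get_global_id(0);
    if (i < n) {
        int produced = input[i];          // stand-in for the real work
        // write_pipe returns 0 on success, non-zero if the pipe is full.
        write_pipe(out_pipe, &produced);
    }
}

// Consumer: pulls elements out of the pipe as they become available.
kernel void consumer(read_only pipe int in_pipe, global int *output)
{
    int item;
    if (read_pipe(in_pipe, &item) == 0)   // 0 means an element was read
        output[get_global_id(0)] = item;
}
On the host side the pipe itself would be created with clCreatePipe and passed to both kernels as an ordinary kernel argument.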
The tradeoff: The performance hit for doing the readback is that the GPU will be idle waiting for work, whereas if you just enqueue a bunch of kernels it will stay busy.
Simple: So I think the answer depends on how much elementsToBeProcessed will vary. If a sequence of runs might be (for example) 20000, 19760, 15789, 19345 then I'd always run 20000 and have a few idle work items. On the other hand, if a typical pattern is 20000, 4236, 1234, 9000 then I'd read back elementsToBeProcessed and enqueue the kernel for only what is needed.
Advanced: If your pattern is monotonically decreasing you could interleave the readback with the kernel enqueue, so that you're always keeping the GPU busy but you're also making them smaller as you go. Between every kernel enqueue start an async double-buffered readback of a copy of the elementsToBeProcessed and use it for the kernel after the one you enqueue next.
Like this:
1. elementsToBeProcessedA = starting value
2. elementsToBeProcessedB = starting value
3. eventA = NULL
4. eventB = NULL
5. Enqueue kernel with NDRange of elementsToBeProcessedA
6. Non-blocking clEnqueueReadBuffer for elementsToBeProcessedA, taking eventA
7. If non-null, wait on eventB, release event
8. Enqueue kernel with NDRange of elementsToBeProcessedB
9. Non-blocking clEnqueueReadBuffer for elementsToBeProcessedB, taking eventB
10. If non-null, wait on eventA, release event
11. Goto 5
This will keep the GPU fully saturated and yet will use smaller elementsToBeProcessed values as it goes. It will not handle the case where elementsToBeProcessed increases, so don't do it this way if that is the case.
An alternate solution: Always run a fixed number of global work items, enough to fill the GPU but not more. Each work item should then look at the total number of items to be done for this pass (elementsToBeProcessed) and then do its portion of the total.
uint elementsToBeProcessed = <read from global memory>
uint step = get_global_size(0);
for (uint i = get_global_id(0); i < elementsToBeProcessed; i += step)
{
    <process item "i">
}
A simplified example: global work size of 5 (artificially small for the example), elementsToBeProcessed = 19: on the first pass through the loop elements 0-4 are processed, on the second pass 5-9, on the third pass 10-14, and on the fourth pass 15-18.
You'd want to tune the fixed global work size to exactly match your hardware (compute units * max work group size or some division of that).
This is not unlike the algorithm for how work items cooperate to copy data into shared local memory regardless of work group size.
The global work size doesn't have to be fixed. E.g. you have 128 stream processors, so you make a kernel with a local size of 128 too. Your global work size can then be any multiple of that value: 256, 4096, etc.
The local group size, though, is usually determined by the hardware specs. If you have more data to process, just increase the number of work-groups involved.

OpenCL: ID of the physical core being used

I'm trying to get something to work but I run out of ideas so I figured I would ask here.
I have a kernel that has a large global size (usually 5 Million)
Each of the threads can require up to 1Mb of global memory (exact size not known in advance)
So I figured... OK, on my typical target GPU I have 6 GB and I can run 2880 threads in parallel, more than enough, right?
My idea is to create a big buffer (well actually 2 because of the max buffer size limitation...)
Each thread pointing to a specific global memory area (with the coalescence and stuff, but you get the idea...)
My problem is: how do I know which thread is currently being run (in the kernel code) so I can point to the right memory area?
I did find the cl_arm_get_core_id extension, but this only gives me the workgroup, not the actual thread being used, plus it does not seem to be available on all GPUs, since it's an extension.
I have the option to have work_group_size = nb_compute_units / nb_cores and have the offset to be arm_get_core_id() * work_group_size + global_id() % work_group_size
But maybe this group size is not optimal, and the portability issue still exists.
I can also enqueue a lot of kernel calls with a global size of 2880, and there I obviously know where to point to with the global id.
But won't this lead to a lot of overhead because of the 5 million / 2880 kernel calls? Plus, any work group that finishes before the others will be idle until all workgroups for this call have finished their job.
Any ideas to do this properly are very welcome !
Well, you are storing 1 MB per WI for temporary computations (because you are not keeping them; otherwise you wouldn't have enough memory).
Then, why not simply let it spill to global memory? Does the compiler complain? If it does complain, then you need other approaches:
One possibility is to create a queue (just a boolean array) of the memory zones that are free for use by the work-groups. Every time a new work-group is launched, it takes an empty slot and sets the boolean to the "used" state. You can do this with the atomic_cmpxchg() atomic operation.
It may introduce a small overhead to launch each WG, but it would probably be negligible if each WI needs 1 MB of global memory.
Here you have a small example of how to do atomic_cmpxchg() LINK
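Setting the linked example aside, a minimal sketch of that slot-claiming idea might look like the kernel below; the buffer layout, names and sizes are assumptions, and it presumes at least as many slots as work-groups that can be resident at once:
// flags[i] == 0 means scratch slot i is free, 1 means a work-group owns it.
// `scratch` is one big buffer carved into `num_slots` chunks of `slot_bytes`.
kernel void work(volatile global int *flags, global char *scratch,
                 uint num_slots, ulong slot_bytes)
{
    local int slot;

    // One work-item claims a free slot on behalf of the whole work-group.
    if (get_local_id(0) == 0) {
        slot = -1;
        uint i = 0;
        // Groups that hold a slot always release it when they finish, so with
        // enough slots this scan terminates quickly.
        while (slot < 0) {
            // Atomically flip flags[i] from 0 to 1; the returned old value
            // tells us whether we actually got the slot.
            if (atomic_cmpxchg(&flags[i], 0, 1) == 0)
                slot = i;
            i = (i + 1) % num_slots;
        }
    }
    barrier(CLK_LOCAL_MEM_FENCE);

    global char *my_mem = scratch + (ulong)slot * slot_bytes;
    // ... every work-item in the group uses my_mem as its scratch area ...

    // Wait until the whole group is done with the slot, then release it.
    barrier(CLK_GLOBAL_MEM_FENCE);
    if (get_local_id(0) == 0)
        atomic_xchg(&flags[slot], 0);
}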

Qt4/Opengl bindTexture in separated thread

I am trying to implement a CoverFlow-like effect using a QGLWidget; the problem is the texture loading process.
I have a worker (QThread) for loading images from disk, and the main thread checks for newly loaded images; if it finds any, it uses bindTexture to load them into the QGLContext. While the texture is being bound, the main thread is blocked, so I get an fps drop.
What is the right way to do this?
I have found that the default behaviour of bindTexture in Qt4 is extremely slow:
bindTexture(image, target, format, LinearFilteringBindOption | InvertedYBindOption | MipmapBindOption)
Using only LinearFilteringBindOption in the binding options speeds things up a lot; this is my current call:
bindTexture(image, GL_TEXTURE_2D, GL_RGBA, QGLContext::LinearFilteringBindOption);
More info: the load time for a 3800x2850 bmp file was reduced from 2 seconds to 34 milliseconds.
Of course, if you need mipmapping, this is not the solution. In this case, I think that the way to go is Pixel Buffer Objects.
Binding in the main thread (single QGLWidget solution):
1. Decide on a maximum texture size. You could base it on the maximum possible widget size, for example. Say you know that the widget can be at most (approximately) 800x600 pixels and the largest visible cover has 30-pixel margins above and below it and a 1:2 aspect ratio -> 600 - 2*30 = 540 -> the maximum cover size is 270x540, stored e.g. in m_maxCoverSize.
2. Scale the incoming images to that size in the loader thread. It doesn't make sense to bind larger textures, and the larger a texture is, the longer it takes to upload to the graphics card. Use QImage::scaled(m_maxCoverSize, Qt::KeepAspectRatio) to scale the loaded image and pass it to the main thread.
3. Limit the number of textures, or better, the time spent binding them per frame. I.e. remember the time at which you started binding textures (e.g. QTime bindStartTime;) and after binding each texture do:
if (bindStartTime.elapsed() > BIND_TIME_LIMIT)
    break;
BIND_TIME_LIMIT would depend on the frame rate you want to keep. But of course, if binding a single texture takes much longer than BIND_TIME_LIMIT, you haven't solved anything.
You might still experience a framerate drop while loading images on slower machines / graphics cards, though. The rest of the code should be prepared to live with it (e.g. use actual time to drive the animation).
An alternative solution is to bind in a separate thread (using a second, invisible QGLWidget; see the documentation):
2. Texture uploading in a thread.
Doing texture uploads in a thread may be very useful for applications handling large amounts of images that needs to be displayed, like for instance a photo gallery application. This is supported in Qt through the existing bindTexture() API. A simple way of doing this is to create two sharing QGLWidgets. One is made current in the main GUI thread, while the other is made current in the texture upload thread. The widget in the uploading thread is never shown, it is only used for sharing textures with the main thread. For each texture that is bound via bindTexture(), notify the main thread so that it can start using the texture.

Loading images in Flex cause memory to go way up, in Internet Explorer 7 (& other browsers)

Loading images into Flex (size < 100 KB) causes IE7's memory to increase by a megabyte per image. What's going on here? Here is the code I have -- this runs for each image:
var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(IOErrorEvent.IO_ERROR, Retry); // retry it
loader.contentLoaderInfo.addEventListener(Event.COMPLETE,
    function(event:Event):void
    {
        var bitmap:Bitmap = (event.target.loader as Loader).content as Bitmap;
        // save it & load next
    });
loader.load(new URLRequest(imageURL));
This also happens in Chrome (2.0.172.33) & Firefox (3.0.10). How can I reduce the memory usage?
Thanks!
I don't think that an increase of about 1 MB per image is necessarily something to worry about. You point out that the images are less than 100 KB, but you're probably looking at the wrong number: for example, a 640x480 jpg that I've just been sent takes ~48 KB, but if you do the math, the raw image takes up 900 KB (640 * 480 * 3 = 921,600 bytes). And if you use transparency, multiply by 4 instead of 3. The thing is that the player has to unpack the image in order to manipulate it. Storing the raw bytes alone for such an image can take up a meg or more (depending on its size).
Rather than focusing on reducing the memory usage per image (which is a rough estimate anyway), you're probably better off checking that you're cleaning up after yourself when you're done with the images. Failing to do that could lead to more serious problems. I agree with rhtx that Flex Builder's Profiler is a good tool for detecting leaks, if it's available to you. A simple test, in this case, could be loading an image, taking a memory snapshot, unloading the image, forcing a GC, taking a second snapshot and comparing it to the first one.
Are there any browsers or browser/OS combinations where you don't experience this issue?
I've worked through a couple of difficult memory issues and I've found that the Profiler is more than worth the extra $500 for the Flex Pro license, if that is an option for you.
I don't know why you're seeing a 1M jump each time you load an image, but I do know that the browser requests chunks of memory from the OS when it sees that it needs additional memory - sort of like buying in bulk. So, it makes some sense that the memory increase would be greater than necessary.
I may be (am probably) way off on this, but the anonymous function being used to handle the complete event feels like it could be causing a memory leak for you. My thought is that the contentLoader cannot be removed and contains a copy of the image's ByteArray. This is not a researched theory - if I have time later today, or tomorrow, I'll see if I can do some research and back this up (or correct myself).
Try tracing through your function in the Event.COMPLETE event listener to make sure it's being hit exactly when you expect it to be.
It might be better to just store the imageURL in an ArrayCollection and then reference it by binding an image to it... for example
<mx:Image source="{myAC.getItemAt(6).imagePath}" ...
or:
var tmpImage:Image = new Image();
tmpImage.source = myAC.getItemAt(i).imagePath;
