UDK: Using the Mobile Emulator without huge load times

My specs are:
4Gb RAM
GTX560 Ti
Dual Core Intel 2.8 Ghz processor
UDK takes about 15 minutes to start up when mobile features are enabled. While launching, it sits 99% of the time on the line "Compiling shaders for material MOBEMU Lit/Unlit" and all its rendering permutations.
How can I avoid this?
Why is it taking so long? Thanks in advance!

A quick Google search for MOBEMU turned up: http://www.3dbuzz.com/forum/threads/193948-QUESTION-Disabling-MOBEMU-at-Startup

Related

Mouse pointer lags in Qt 5 app on TI Sitara

We are using a TI Sitara AM33 system-on-chip with a 600 MHz clock and 256 MB of RAM.
The OS is OE Yocto v2.1 Krogoth, kernel 4.4.19. The video driver is DRM/KMS.
We are having issues with mouse performance.
I have made a little video to demonstrate the effect:
https://www.youtube.com/watch?v=5dRDGzhcnn0
Note how the mouse pointer moves smoothly over the blank area of the window but lags over the controls, as if it were moving through jelly. With more controls on the window, the mouse becomes so laggy it's unusable. CPU load is minimal, though.
There can hardly be an error in the example app in the video: we created a blank Qt Widgets project, put the controls on the form, and that's it; it does nothing else at all.
Has anyone seen such mouse issues?
If you're not using an X server, then you need to check which platform plugin Qt is using on your platform. Perhaps that plugin is broken or is not the best choice in your situation.
Your application is also very unlikely to use the GPU in any capacity other than compositing the windows (if at all), so the low CPU load is rather telling.
It seems as if the event dispatch system on your platform gets slower the more widgets there are. This is unlikely to have much to do with the graphics side of things. As a process of elimination, you could first benchmark the performance of the synchronization primitives (QBasicMutex and QMutex) and of atomic integers and pointers, to ensure they are configured correctly for your platform.
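A minimal sketch of such a benchmark, assuming Qt 5; QMutex, QAtomicInt, and QElapsedTimer are standard Qt classes, and the iteration count is arbitrary. Cross-compile it for the Sitara and compare the per-operation costs with a desktop build:

```cpp
// Micro-benchmark of uncontended QMutex lock/unlock and QAtomicInt
// increments, to check whether the synchronization primitives are
// unexpectedly slow on the target.
#include <QAtomicInt>
#include <QElapsedTimer>
#include <QMutex>
#include <cstdio>

int main()
{
    const int iterations = 1000000;

    QMutex mutex;
    QElapsedTimer timer;
    timer.start();
    for (int i = 0; i < iterations; ++i) {
        mutex.lock();    // uncontended lock/unlock pair
        mutex.unlock();
    }
    const long long mutexNs = timer.nsecsElapsed();

    QAtomicInt counter(0);
    timer.restart();
    for (int i = 0; i < iterations; ++i)
        counter.fetchAndAddOrdered(1);  // full-barrier atomic increment
    const long long atomicNs = timer.nsecsElapsed();

    std::printf("QMutex lock/unlock: %lld ns/op\n", mutexNs / iterations);
    std::printf("QAtomicInt add:     %lld ns/op\n", atomicNs / iterations);
    return 0;
}
```

You can also see which platform plugin is in use by launching the app with the QT_QPA_PLATFORM environment variable set (or the -platform argument) and comparing, for example, linuxfb against eglfs.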

How often can the flash memory of an MPC be flashed/erased? (flash endurance, life cycles)

I have started programming the MPC5748G. I'm working on an existing application, adding a module that depends on other modules.
The module I'm developing works well on its own, but when it is combined with the other modules, both sides have to be modified. For that I'm doing a lot of debugging and re-programming of the flash memory.
Some days I go through up to 100 flash/erase cycles.
Development will take a while, say two months, and I don't know whether the MCU can survive that period.
That's why I'm wondering:
What is the maximum number of flash cycles for this MCU? The documentation is very long and I couldn't find the information in it.
Also, I'm flashing and erasing only specific blocks of the flash memory. The total flash is 5 MB but I'm using only 2 MB, programmed in blocks of 256 KB. I want to know whether flashing one block affects the other blocks too, and whether the flash's performance degrades after each erase/program operation.
I know every MCU supports a different number of flash cycles, but any pointers on how to find the answer in the documentation would be very helpful too.
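For scale, here is the back-of-the-envelope arithmetic I'm working from, as a small C++ sketch; the rated-endurance constant is only a placeholder until I find the real figure in the data sheet's flash endurance table:

```cpp
// Rough estimate: total expected P/E cycles over the development period
// versus a PLACEHOLDER endurance rating (substitute the real value from
// the MPC5748G data sheet).
#include <cstdio>

int main()
{
    const long cyclesPerDay = 100;    // my observed worst case
    const long days = 60;             // roughly 2 months of development
    const long ratedCycles = 100000;  // PLACEHOLDER -- check the data sheet

    const long expected = cyclesPerDay * days;  // = 6000
    std::printf("expected %ld cycles, rated %ld, margin %.1fx\n",
                expected, ratedCycles,
                static_cast<double>(ratedCycles) / expected);
    return 0;
}
```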
Thanks

Resizing images (JPEG or decompressed image)

In my last question I asked whether there was a better way to rotate images than I had thought of. I ended up discovering jpegtran and have since found libjpeg-turbo.
Now I am looking for a better way to resize the images (JPEGs) than ImageMagick and GraphicsMagick.
Is there a specialized command-line tool that resizes images more efficiently than ImageMagick or GraphicsMagick? Maybe the resizing can be done on the GPU using OpenCL or OpenGL?
The provided hardware is the same as in the other post:
Intel Atom D525 (1.8 GHz)
Mobility Radeon HD 5430 Series
4 GB of RAM
SSD Vertility 3
Check out this link: http://leocharre.com/articles/faster-image-resizing-in-linux/
In particular, the author mentions that imgresize is faster than ImageMagick, and that epeg is extremely fast.
epeg (http://www.systhread.net/texts/200507epeg1.php) seems quite well documented for generating thumbnails. If the quality is good enough, this could be the solution.
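For illustration, a minimal use of epeg's C API (function names recalled from Epeg.h, so verify them against your copy); the speed comes from decoding directly at the target size:

```cpp
// Thumbnail a JPEG with epeg: decode at reduced size, re-encode.
#include <Epeg.h>
#include <cstdio>

int main()
{
    Epeg_Image *im = epeg_file_open("input.jpg");
    if (!im) {
        std::fprintf(stderr, "cannot open input.jpg\n");
        return 1;
    }
    epeg_decode_size_set(im, 160, 120);  // decode directly at thumbnail size
    epeg_quality_set(im, 85);            // output JPEG quality
    epeg_file_output_set(im, "thumb.jpg");
    epeg_encode(im);
    epeg_close(im);
    return 0;
}
```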
OpenCL is a standard for cross-platform, parallel programming of modern processors found in personal computers, servers and handheld/embedded devices. It's directly supported by ATI. You'll need to get AMD APP SDK (formerly known as AMD Stream SDK) to get GPU support (also check out this getting started guide).
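As a sketch of that GPU route: when the device supports OpenCL images, a bilinear resize kernel is only a few lines, because the hardware sampler does the filtering. The kernel is held in a C++ string for the usual clCreateProgramWithSource flow; all host-side setup (context, queue, image objects) is omitted here:

```cpp
// OpenCL C kernel (in a C++ raw string) doing bilinear resampling via the
// image sampler. One work-item per destination pixel.
const char *resizeKernel = R"CLC(
__constant sampler_t smp = CLK_NORMALIZED_COORDS_TRUE |
                           CLK_ADDRESS_CLAMP_TO_EDGE |
                           CLK_FILTER_LINEAR;   // hardware bilinear filter

__kernel void resize(__read_only image2d_t src,
                     __write_only image2d_t dst)
{
    int2 pos = (int2)(get_global_id(0), get_global_id(1));
    int2 dim = get_image_dim(dst);
    // Normalized coordinates, so the sampler interpolates in the source.
    float2 uv = (float2)((pos.x + 0.5f) / dim.x, (pos.y + 0.5f) / dim.y);
    write_imagef(dst, pos, read_imagef(src, smp, uv));
}
)CLC";
```

Whether this beats a tuned CPU path on a Radeon HD 5430 driving an Atom D525 depends on transfer overhead, so it is worth benchmarking both.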
Take a look at Intel's IPP - Integrated Performance Primitives. It's a multi-threaded software library of functions for multimedia and data processing applications. Among other features, it has functions to resize images (bilinear, nearest neighbor, etc.). Unfortunately, it is not free (the cheapest version costs $199).
VIPS is a free image processing system. It claims that compared to most image processing libraries, VIPS needs little memory and runs quickly, especially on machines with more than one CPU. See the Speed and Memory Use page for a simple benchmark against other similar systems.
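For illustration, a minimal libvips program might look like this; it assumes a libvips 8.x install, where vips_thumbnail() is available (older releases used different entry points), and it builds with the flags from pkg-config vips:

```cpp
// Shrink a JPEG with libvips' C API (callable from C++): decode and
// downscale in one pass, then write the result.
#include <vips/vips.h>

int main(int argc, char **argv)
{
    if (VIPS_INIT(argv[0]))   // initialize libvips; nonzero means failure
        return 1;

    VipsImage *thumb = nullptr;
    if (vips_thumbnail("input.jpg", &thumb, 320, NULL))  // width 320 px
        return 1;
    if (vips_image_write_to_file(thumb, "small.jpg", NULL))
        return 1;

    g_object_unref(thumb);    // VipsImage is a GObject
    vips_shutdown();
    return 0;
}
```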
You can actually do a lot of bulk processing like this with GIMP's CLI options.
http://www.gimp.org/tutorials/Basic_Batch/
There are also djpeg and cjpeg from the Independent JPEG Group, which can rescale an image to an M/N fraction while decompressing. Not perfect, but very fast.
Simply use FFmpeg. It can resize, convert, change quality, and so on.
It also works with almost all known types of videos/audios/pictures.
It works on Linux/Unix too, and its source code is open (written in C).
You can get it here (Windows/compiled exe) or here (source code and so on).
If you are developing a program, I recommend using the standard GDI+ library.
It does everything with pictures.
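A sketch of that route (Windows only, and hedged: check the signatures against the GDI+ headers). GDI+ has no single resize call; the usual pattern is to draw into a smaller bitmap with a chosen interpolation mode and save it through an encoder looked up by MIME type:

```cpp
// Downscale a JPEG with GDI+: draw the source into a smaller Bitmap,
// then save it through the JPEG encoder.
#include <windows.h>
#include <gdiplus.h>
#include <cwchar>
#include <vector>
#pragma comment(lib, "gdiplus.lib")

// Look up an installed encoder's CLSID by MIME type, via the standard
// GetImageEncodersSize/GetImageEncoders enumeration.
static bool GetEncoderClsid(const WCHAR *mime, CLSID *clsid)
{
    UINT num = 0, size = 0;
    Gdiplus::GetImageEncodersSize(&num, &size);
    if (size == 0)
        return false;
    std::vector<BYTE> buf(size);
    auto *codecs = reinterpret_cast<Gdiplus::ImageCodecInfo *>(buf.data());
    Gdiplus::GetImageEncoders(num, size, codecs);
    for (UINT i = 0; i < num; ++i) {
        if (wcscmp(codecs[i].MimeType, mime) == 0) {
            *clsid = codecs[i].Clsid;
            return true;
        }
    }
    return false;
}

int main()
{
    Gdiplus::GdiplusStartupInput startupInput;
    ULONG_PTR token;
    Gdiplus::GdiplusStartup(&token, &startupInput, nullptr);
    {
        Gdiplus::Image src(L"input.jpg");
        Gdiplus::Bitmap dst(640, 480, PixelFormat24bppRGB);
        Gdiplus::Graphics g(&dst);
        g.SetInterpolationMode(Gdiplus::InterpolationModeHighQualityBicubic);
        g.DrawImage(&src, 0, 0, 640, 480);  // scale into the smaller bitmap

        CLSID jpegClsid;
        if (GetEncoderClsid(L"image/jpeg", &jpegClsid))
            dst.Save(L"output.jpg", &jpegClsid, nullptr);
    }
    Gdiplus::GdiplusShutdown(token);
    return 0;
}
```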

Developing with OpenCL on ATI and Nvidia at the same time

Our workgroup is slowly trying out a little OpenCL in a side project. So far 'everybody' is working on an NVIDIA Quadro FX 580. Now we are planning to buy new computers for new colleagues, and instead of the FX 580 we could buy an ATI FirePro V4800, which costs only 15 EUR more and gives us 1 GB of RAM instead of 512 MB, which will be beneficial for our data-intensive tasks.
So, how much trouble is it to develop OpenCL code on Nvidia and ATI at the same time?
I read the following SO question, Running OpenCL on hardware from mixed vendors, which was very pessimistic about developing on/for different vendors. On the other hand, the question is already a year old.
What do you recommend?
I have previously worked extensively with the CUDA programming language.
I have been planning to start developing apps using OpenCL. As you mentioned, one of OpenCL's best features is that it runs on hardware from many vendors (Intel, AMD, and Nvidia).
One project I came across that uses OpenCL extensively for large-scale development is http://sourceforge.net/projects/hypgad/. It might be a good idea to look at this group's source code and understand how they developed their application for so many kinds of hardware, including the Sony Cell processor.
Another approach would be to use PyOpenCL, which provides a higher level of abstraction than OpenCL and can significantly reduce the coding effort.
Do you need the code to run unchanged on both pieces of hardware? If so, you may have to restrict yourself to a subset of common functionality.
If you can run slightly different code on each, you will probably get better performance. In CUDA/OpenCL you generally have to tune the algorithms to the amount of RAM and the number of GPU engines anyway, so it shouldn't be much more work to also tweak for Nvidia/AMD.
The biggest problem is workgroup sizes. Some ATI cards I have used crash at sizes above 64, but that may be down to the Apple OS X 10.6 drivers I am using.
Developing for both ATI and NVIDIA is actually not too difficult, so long as you avoid using any part of either vendor's SDK. Stick to OpenCL as defined in the OpenCL spec (www.khronos.org/opencl) and your code will stay syntactically portable. Due to differences in the underlying architectures, performance portability may be an issue; local and global work sizes really have to be determined independently for each card to maximize performance. Another thing to pay attention to is the types being used. Vector types (float2, float4) are especially useful on ATI cards, as each processing element actually contains 4 execution units (one for each RGB color channel, plus alpha), as the sketch below illustrates.
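To make the vector-type point concrete, here is a trivial kernel written with float4 (again as a C++ string for the usual clCreateProgramWithSource flow); each work-item then occupies all four lanes of a VLIW processing element on pre-GCN ATI hardware, instead of one:

```cpp
// SAXPY over float4 elements: one multiply-add per component, four
// components per work-item.
const char *saxpy4 = R"CLC(
__kernel void saxpy4(__global const float4 *x,
                     __global float4 *y,
                     const float a)
{
    size_t i = get_global_id(0);
    y[i] = a * x[i] + y[i];   // componentwise multiply-add on 4 lanes
}
)CLC";
```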

Is it possible to use OpenCL for PowerVR SGX530 GPU device?

Is it possible to use OpenCL for PowerVR SGX530 GPU device? I have to write image recognition software that can run on Droid X smartphone. I would greatly appreciate it if someone could provide links, references, citations, sample code.
It seems it is, but it depends on the SoC vendor. Have a look at this:
http://www.imgtec.com/forum/forum_posts.asp?TID=194
Imagination Technologies says the GPU has OpenCL 1.0 Embedded Profile capabilities, but whether a driver is available depends on the SoC vendor.
I found that it is not possible to use OpenCL. I'd have to rewrite my algorithm for OpenGL, using shaders and vertices; that way I can get "general purpose" programming (welcome back to the past, about 4-5 years back to be more exact).
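To illustrate that route: "general purpose" work on the SGX530 means packing data into a texture, running a GLSL ES 1.00 fragment shader (the version OpenGL ES 2.0 supports) over a full-screen quad, and reading the result back from an FBO. A minimal shader of that kind, here just computing per-pixel luminance, with all host-side GL setup omitted:

```cpp
// GLSL ES 1.00 fragment shader held in a C++ string; the texture is the
// input 'array', gl_FragColor the output element.
const char *fragSrc =
    "precision mediump float;\n"
    "uniform sampler2D uData;    // input data packed as a texture\n"
    "varying vec2 vTexCoord;     // interpolated from the vertex shader\n"
    "void main() {\n"
    "    vec4 p = texture2D(uData, vTexCoord);\n"
    "    float y = dot(p.rgb, vec3(0.299, 0.587, 0.114)); // BT.601 luma\n"
    "    gl_FragColor = vec4(y, y, y, 1.0);\n"
    "}\n";
```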
Take a look at the following thread, which elaborates on what is and is not possible to date (14 Nov 2010):
link text
I've seen this example from the folks from Nokia:
http://www.hotchips.org/archives/hc21/1_sun/HC21.23.2.OpenCLTutorial-Epub/HC21.23.270.Pulli-OpenCL-in-Handheld-Devices.pdf
So I ask myself: is there an SDK for any mobile platform/OS out there that I could use to port some of my desktop apps to an embedded app? I would really, really appreciate being able to program OpenCL on mobile/tablet systems. Vertex/fragment shaders are not much help, because the embedded-profile specs do not include all the extensions we would need to rewrite our OpenCL code as shaders.
