Qt3D and the Oculus SDK

Given Qt3D's structure, is it possible to integrate the Oculus SDK with a Qt3D application?
I have tried, but my two main obstacles are:
I can't use the textures from the texture swap chain created by the Oculus SDK as a render target attachment.
I am not able to call ovr_SubmitFrame at the end of each frame, since Qt3D doesn't have a signal that would allow me to do so.
Has anyone successfully gotten the Oculus SDK to work with Qt3D? If so, how did you overcome these issues?
Are there any plans to allow the integration of VR SDKs (not just Oculus') in Qt3D in future releases?

You could probably do it with some sort of custom framegraph that encapsulates the stereo rendering functionality and includes a custom component that takes the currently rendered content and submits it to the SDK prior to the swap-buffer call.
Alternatively, you could dive into the code that processes the framegraph itself and see how hard it would be to customize it to work against a VR API. I've done significant work integrating Qt apps with VR, but not specifically with Qt3D.
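To make the first suggestion concrete, below is a minimal, untested sketch of what such a stereo framegraph could look like in Qt 3D's QML API (it would be set as the active framegraph on the scene's RenderSettings). The eye cameras and texture dimensions are hypothetical, and the Texture2D here is allocated by Qt 3D itself; pointing that attachment at a texture owned by the Oculus swap chain is precisely the part with no out-of-the-box hook.

import Qt3D.Core 2.0
import Qt3D.Render 2.0

// Sketch only: leftEyeCamera/rightEyeCamera are hypothetical camera
// entities defined elsewhere in the scene.
RenderSurfaceSelector {
    Viewport {
        normalizedRect: Qt.rect(0.0, 0.0, 1.0, 1.0)
        RenderTargetSelector {
            target: RenderTarget {
                attachments: [
                    RenderTargetOutput {
                        attachmentPoint: RenderTargetOutput.Color0
                        // Qt 3D owns this texture; substituting a swap-chain
                        // texture from the Oculus SDK is the unsolved part.
                        texture: Texture2D { width: 1184; height: 1464 }
                    }
                ]
            }
            ClearBuffers {
                buffers: ClearBuffers.ColorDepthBuffer
                CameraSelector { camera: leftEyeCamera }
            }
        }
        // A second RenderTargetSelector, mirroring this one with
        // rightEyeCamera, would produce the right-eye image.
    }
}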

The framegraph will indeed provide one part of the solution for the stereoscopic rendering setup. There is already an anaglyphic stereo example that ships with Qt 3D, showing most of what you need.
Supporting the swap chain of the Oculus SDK will require deeper integration. I do not know the details of the Oculus SDK as yet, but we can take a look.
From what I can see, you should be able to do something analogous to the Scene3D custom Qt Quick 2 item in order to render to the textures provided by the Oculus SDK and to tell Qt 3D which OpenGL context to use. See
http://code.qt.io/cgit/qt/qt3d.git/tree/src/quick3d/imports/scene3d?h=5.7
Nicolas, I also do not appreciate you publicly saying that KDAB are not much help. I only received an email from Karsten on Friday, which I responded to despite being on vacation, saying that we can help, but that it will be on a best-efforts basis, since you are not paying and I have a very full workload preparing Qt 3D for release at the end of the month along with Qt 5.7. Today is a public holiday in the UK, as you are aware, yet you are already saying detrimental things about us.
You were also directed to post to the interest@qt-project.org mailing list or the Qt forums, as I do not tend to monitor SO or the Qt forums on a regular basis. You could also have emailed us directly or gone via the development@qt-interest mailing list.
We would be more than happy to set up a support agreement with you.


Qt6 circular gauge issue

I am developing a QML app in Qt 5 that uses CircularGauge. I want to port the application to Qt 6, but in Qt 6 Qt Quick Extras is missing, so CircularGauge is not available. Is Qt planning to make it available in upcoming Qt 6 releases? What can I use instead of CircularGauge in Qt 6?
Does anyone know more about this?
As far as I know there are no plans yet to port it to Qt 6. You could create a suggestion on Jira to express your interest in having it.
The code is here and here if you want to try to port it yourself or just use parts of it (keeping in mind the license, which is LGPL).
As you stated in the question, CircularGauge is not available in Qt6. So what can you do?
As a minimal effort, you can substitute CircularGauge with a functionally similar component, for instance RangeSlider. Of course, a RangeSlider looks nothing like a CircularGauge, but it will at least allow you to compile and run your application, giving you something to test whilst you decide on your options.
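As a rough sketch of that stop-gap (the rpmValue property and range are hypothetical placeholders for whatever the gauge displayed):

import QtQuick.Controls 2.15

// Temporary stand-in for CircularGauge while porting
RangeSlider {
    from: 0
    to: 8000
    first.value: 0
    second.value: rpmValue   // hypothetical binding from the ported app
    enabled: false           // display-only, like a gauge
}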
Then, as others have stated, you will need to invest more effort in porting. If you look at the source code of CircularGauge, you'll see that it uses a Canvas with a custom onPaint implementation. You could do the same in your port, or you could find an alternative, e.g. Shape with ShapePath. These efforts are non-trivial, and it boils down to the level of effort you wish to invest.
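A bare-bones, untested sketch of the Canvas route, drawing nothing but a value arc (the range and colour are arbitrary choices, not taken from CircularGauge):

import QtQuick 2.15

// Minimal gauge-like arc; a starting point only, not a full port
Canvas {
    id: gauge
    property real value: 0                // expected range 0..100
    width: 200; height: 200
    onValueChanged: requestPaint()
    onPaint: {
        var ctx = getContext("2d")
        ctx.reset()
        var start = 0.75 * Math.PI                   // arc begins lower-left
        var span = 1.5 * Math.PI * (value / 100)     // sweep proportional to value
        ctx.lineWidth = 12
        ctx.strokeStyle = "steelblue"
        ctx.beginPath()
        ctx.arc(width / 2, height / 2, width / 2 - ctx.lineWidth,
                start, start + span, false)
        ctx.stroke()
    }
}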

Writing a custom Qt Location GeoServices plugin to use geo-referenced image file as map source

I'm currently working on a Qt Quick application that will provide a map viewer for a smallish area (one square kilometre or so). The map details will be provided in a single geo-referenced image file (GeoTIFF, geo-referenced PDF, ESRI shapefile, etc.), along with display of the current location, operator-identified points of interest, and so on. Its primary responsibility is the display of custom maps (as opposed to generic maps retrieved from public map image service providers such as OSM, Mapbox, or ESRI), and it will often be used in areas with limited connectivity.
An extensive web search has identified others who have made similar enquiries in the past (here, Qt forums etc), and the general suggestions for solutions are as follows:
ArcGIS Runtime with Qt SDK: Doesn't work for me, as down the track I'm intending to target an embedded Linux device using an ARM processor, and ArcGIS doesn't make source available for cross-compilation to arbitrary targets. (They've recently produced an Android release, but nothing for ARM Linux in general.)
QGIS developer libraries: The GPL licence is not compatible with my commercial development.
Use the Qt Location Map component with a local tile server or offline tile collection (some plugins have recently introduced support for this): Seems a bit of a hack. As noted, I'm primarily using custom maps, as opposed to offline copies of public map server images, and my images won't be big enough to really warrant tiling otherwise.
It would be feasible to develop a Qt Quick component from scratch to do this, but given that the existing Qt Location Map component provides a well-defined, pre-existing front-end interface for everything my map would need to do, and has an extensible plugin-based architecture, writing a custom Qt Location GeoServices plugin seems the most sensible and elegant way forward.
I've started examining the source code of the existing plugins, but can't shake the feeling that in a world containing 8 billion people, with "nothing new under the sun", this would have been done already if it were a good idea...
Would anyone with more familiarity with the Qt Location module care to comment?
Since geo-referenced images can be arbitrarily large, it is standard practice to convert them into a tile pyramid so that they can be displayed efficiently on any hardware (at the cost of, at worst, doubling the size, depending on how many levels you want).
Even if you were to write your own geoservices plugin, you would most likely end up tiling your GeoTIFF anyway, either directly or by using third-party code.
That said, QtLocation does allow you to use custom tilesets (see http://doc.qt.io/qt-5/location-plugin-osm.html and look for osm.mapping.custom.host), served in most ways (http, https, file, qrc, etc.).
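For illustration, a minimal sketch of pointing the osm plugin at a local tile tree (the file path is a hypothetical placeholder; the parameter itself is the documented osm.mapping.custom.host from the link above):

import QtQuick 2.15
import QtLocation 5.6
import QtPositioning 5.6

Map {
    anchors.fill: parent
    plugin: Plugin {
        name: "osm"
        // Hypothetical local tile root, laid out as .../z/x/y.png
        PluginParameter {
            name: "osm.mapping.custom.host"
            value: "file:///home/user/tiles/"
        }
    }
    // The custom host shows up as an extra entry in supportedMapTypes;
    // select it via activeMapType to actually use it.
    center: QtPositioning.coordinate(51.5, -0.12)
    zoomLevel: 14
}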
So go ahead, fire up QGis, install the QTiler plugin, and convert your images.
If you need to ship the raw images over the net directly to the clients (thus needing to do the conversion on the client), you can either look at what QTiler does, or build up your own GDAL pipeline (gdal_translate, gdalwarp and gdal2tiles) and ship the relevant GDAL bits with your application.
If you need multiple images at the same time, you can either use multiple Plugin elements with different plugin parameters, or fork the osm plugin and add support for multiple custom hosts.
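The first option might look like this (the ids and paths are hypothetical):

import QtQuick 2.15
import QtLocation 5.6

Item {
    // Two independent plugin instances, each serving a different tile set
    Plugin {
        id: siteAPlugin
        name: "osm"
        PluginParameter { name: "osm.mapping.custom.host"; value: "file:///maps/siteA/" }
    }
    Plugin {
        id: siteBPlugin
        name: "osm"
        PluginParameter { name: "osm.mapping.custom.host"; value: "file:///maps/siteB/" }
    }
    // Each Map element then picks one, e.g. Map { plugin: siteAPlugin }
}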
Based upon Paul's response, and a couple of similar responses to the same enquiry on Qt forums and mailing lists, plus my own investigation, I'd conclude the following:
Generating a custom Qt Location GeoServices plugin to provide map imagery directly from a geo-referenced image file would not be a great idea. The implementation would be less than straightforward, and in practice any non-trivial map image would likely be large enough that an initial tiling step, followed by use of one of the standard tiled mapping plugins referencing a local tile set, would be more appropriate anyway.

SketchUp API for navigating around a model (eventually to integrate with Leap Motion)

I'm trying to use the SketchUp API to navigate around 3D models (zoom, pan, rotate, etc.). My ultimate aim is to integrate it with a Leap Motion app.
However, right now I think my first step would be to figure out how to control the basic navigation gestures via the SketchUp API. After a bit of research, I see that there are the 'Camera' and 'Animation' interfaces, but I think they would be more suited to 'hardcoded' paths and motions within the script.
Therefore I was wondering: does anyone know how I can write a plugin that is able to accept inputs from another program (my eventual Leap Motion app in this case) and translate them into specific navigation commands using the SketchUp API (like pan, zoom, etc.)? Can this be done using the 'Camera' and 'Animation' interfaces (in some sort of step increments), or are there other interfaces I should be looking at?
As usual, any examples would be most helpful.
Thanks!
The View, Camera and Animation classes are what you are looking for. Maybe you don't even need the Animation class; you might be fine just using the timer in the UI class. It depends on the details of what you will be doing.
You can set the camera directly like so:
Sketchup.active_model.active_view.camera.set(ORIGIN, Z_AXIS, Y_AXIS) # eye, target, up
or you can use View.camera=, which also accepts a transition time argument if you find that useful.
For bridging the input you could always create a Ruby C extension that takes care of the communication between the applications. There are some quirks in getting C extensions to work for SketchUp Ruby as opposed to regular Ruby though, depending on how you compile it.
I wrote a hello world example a couple of years ago: https://bitbucket.org/thomthom/sketchup-ruby-c-extension
Though note that I have since found a better solution for Windows, using the Development Kit from RubyInstaller: http://rubyinstaller.org/
This answer relates to my comment above regarding the view seemingly 'jumping' when I assign a new camera to the current view using camera=, but not when I use camera.set.
I figured out this was happening because the FOV of the original camera was different, and the new camera was defaulting to an FOV of 30. Explicitly creating the camera with the optional perspective and FOV arguments taken from the initial camera solves the problem:
# Carry over the perspective flag and field of view from the current camera
new_camera = Sketchup::Camera.new(new_eye, new_target, curr_camera.up, curr_camera.perspective?, curr_camera.fov)
Hope people find this useful!

Is there something like .NET Reflector for Qt?

I once saw a nice tool called .NET Reflector. It can show the entire object hierarchy of .NET binaries/apps (sorry if the term is wrong).
Is there something like this for Qt? As Qt has very good QMetaObject abilities, it should be possible to traverse object trees, call methods (slots), change properties, etc.
I am currently refactoring a Qt project. The naming of the variables is very domain-specific and I am not an expert in this domain, so it is difficult for me to map a widget variable to the widget on the screen. Such a tool would be a great help for me in understanding the code.
Thank you very much in advance!
For simple uses you might want to take a look at QObject::dumpObjectTree().
If you need something more advanced, there's KSpy:
kspy: examines the internal state of a Qt/KDE app. KSpy is a tiny library which can be used to graphically display the QObjects in use by a Qt/KDE app. In addition to the object tree, you can also view the properties, signals and slots of any QObject. Basically it provides much the same info as QObject::dumpObjectTree() and QObject::dumpObjectInfo(), but in a much more convenient form. KSpy has minimal overhead for the application, because the kspy library is loaded dynamically using KLibLoader. See /usr/share/doc/kspy/README for usage instructions. This package is part of the KDE Software Development Kit.
It depends on KDE's KLibLoader, so if you are not under KDE you will have to modify it, but that should be rather easy. Sources are here.
There's also the QSpy project, which inspects all the QWidgets of a running application. I'm not sure how well it works, because I couldn't use it on Mac OS X; maybe it works better on Windows. https://github.com/sashao/martlet
http://qt-apps.org/content/show.php/QSpy?content=102287

Flex: Testing UI components at the click level?

I've been working on a Flex component and I'd like to write some automated tests for it. The trouble is, the UI testing tools I've looked at (FlexMonkey and Selenium Flex API) don't simulate "enough":
Most of the bugs which have come up so far relate to the way Flex deals with dragging and dropping, which these libraries can't simulate accurately enough. For example, I need to test a case where a "drop" event occurs in the bottom half of a component – neither FlexMonkey nor the Selenium Flex API can do that (they may simulate a mouse event, but they won't include coordinates).
So, is there any "good" way to automate that sort of testing?
Edit: After much research, it looks like the only piece of software that can do this is iMacros, which is Windows-only, and its interface is... lacking. So I'm going to write my own. Basically, it will put an HTTP interface on java.awt.Robot so that code (in any language) can simulate mouse/keyboard events. If you're interested, PM me and I'll keep you updated.
Edit 2: I have published the first version of the framework I wrote, Blunderbuss, over at BitBucket: http://bitbucket.org/wolever/blunderbuss/ . You'll need Jython to run it (http://www.jython.org/), but after that the flex-client example should work.
Videos of Blunderbuss live over at Vimeo:
Automating Flex testing with Blunderbuss
Blunderbuss test suite running
At the moment this remains a proof of concept, as I haven't had the cycles to clean it up and make it more usable… but maybe enough people bothering me would give me that time :)
I've used Eggplant to test Flash and AIR apps without having to add any hooks into the code. It's a great tool but it's quite expensive. It simulates a real user by VNC-ing into a system and uses image recognition - among other things - to interact with the app.
I am definitely interested in your custom Java class, and (though I am not the best at Java (yet...)), I would be willing to help out if you're thinking of making this collaborative.
As to Flash MouseEvents: unfortunately, there really isn't an accurate way to simulate the drag-and-drop experience in Flash. MouseEvents, when generated by the mouse, are handled in a very different way than regular events, and while you could simulate actions by passing events into the handling functions, or by making the dispatcher fire a new DragEvent( DragEvent.DRAG_DROP..., it will not be the same as having the user interact with it. And for some functionality (like gaining access to the clipboard), nothing inside Flash will accomplish your goals.
To be honest, you're probably headed in the right direction -- using something which is not written in Flash to drive faked mouse events is probably your best bet.
I've never had to use it in Flex, but I recently stumbled across some info on automation packages in the MS Surface SDK. After looking into it, those classes automate user behaviour, which can be used for testing, i.e. move a fake mouse to this point, perform this action. As you're using Flex, see the mx.automation packages and classes. My guess (and hope) is that you'd be able to achieve what you want using these classes.
You could also try AutoHotkey. It is a similar macro-editing program, but it has proven to be very efficient, and you can write scripts and set it up very easily.
