How to display points with Qt3D?

Qt3D makes it very easy to display some mesh primitives:
m_torus = new Qt3DExtras::QTorusMesh();
but I would just like to display a collection of points. I haven't seen anything like
m_points = new Qt3DExtras::QPoints();
Is there a way to do this without writing lower level OpenGL?

Don't know if this is what you're looking for, but check out Qt3DRender::QGeometryRenderer. I use it in a project to display map lines in a 3D scene.
It has a method to define how the vertex buffer data should be rendered (in my project I use Qt3DRender::QGeometryRenderer::LineStrip instead of Qt3DRender::QGeometryRenderer::Points):
geometryRenderer->setPrimitiveType(Qt3DRender::QGeometryRenderer::Points);
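To make that concrete, here is a minimal sketch of a point-cloud entity built that way. It is written against the Qt 5.10+ Qt3D API; the function name makePointCloud and the choice of QPhongMaterial are illustrative, not the only option:

```cpp
// Sketch: render a set of points with Qt3DRender::QGeometryRenderer.
// Assumes a valid rootEntity from an existing Qt3D window/scene.
#include <Qt3DCore/QEntity>
#include <Qt3DRender/QGeometry>
#include <Qt3DRender/QGeometryRenderer>
#include <Qt3DRender/QBuffer>
#include <Qt3DRender/QAttribute>
#include <Qt3DExtras/QPhongMaterial>
#include <QVector3D>
#include <QVector>

Qt3DCore::QEntity *makePointCloud(Qt3DCore::QEntity *rootEntity,
                                  const QVector<QVector3D> &positions)
{
    auto *geometry = new Qt3DRender::QGeometry(rootEntity);

    // Pack the xyz coordinates into a raw float vertex buffer.
    QByteArray bufferBytes;
    bufferBytes.resize(positions.size() * 3 * sizeof(float));
    float *raw = reinterpret_cast<float *>(bufferBytes.data());
    for (const QVector3D &v : positions) {
        *raw++ = v.x(); *raw++ = v.y(); *raw++ = v.z();
    }
    auto *buffer = new Qt3DRender::QBuffer(geometry);
    buffer->setData(bufferBytes);

    // Describe the buffer layout as the default position attribute.
    auto *positionAttribute = new Qt3DRender::QAttribute(geometry);
    positionAttribute->setName(
        Qt3DRender::QAttribute::defaultPositionAttributeName());
    positionAttribute->setVertexBaseType(Qt3DRender::QAttribute::Float);
    positionAttribute->setVertexSize(3);
    positionAttribute->setAttributeType(
        Qt3DRender::QAttribute::VertexAttribute);
    positionAttribute->setBuffer(buffer);
    positionAttribute->setByteStride(3 * sizeof(float));
    positionAttribute->setCount(positions.size());
    geometry->addAttribute(positionAttribute);

    // Tell the renderer to interpret each vertex as a point primitive.
    auto *renderer = new Qt3DRender::QGeometryRenderer(rootEntity);
    renderer->setGeometry(geometry);
    renderer->setPrimitiveType(Qt3DRender::QGeometryRenderer::Points);

    auto *entity = new Qt3DCore::QEntity(rootEntity);
    entity->addComponent(renderer);
    entity->addComponent(new Qt3DExtras::QPhongMaterial(rootEntity));
    return entity;
}
```

Note that points render at one pixel by default; controlling point size requires a custom material/shader with a QPointSize render state.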

AFAIK, there are no simple primitives like lines or points available in Qt3D 2.0, because there is just no one-size-fits-all solution. If you are lucky, someone will step up and add something to extras; otherwise you have to write the solution yourself.
Qt Interest Mailing List Nov 2016 - Lines in Qt3D
There is, however, a PCL point cloud renderer project on GitHub!

Related

How to create map marker to show multi users' facing direction by here sdk?

I want to create a mobile application that can show where your friends are and which direction they are facing. I wanted to use positionIndicator at first, but I can't create more than one positionIndicator on the map view. Then I turned to MapMarker, but I found I can't rotate or scale it. I also tried MapLocalModel, but I don't think it's a good idea to use a 3D model to render a 2D object. Then I thought I should create a new MapObject class, but the constructor of MapObject is package-protected, so I can't call or override it. So, what's the correct way to implement this?
MapLocalModel is in general the right approach for a marker that needs to rotate. Agreed that for a 2D object MapLocalModel is not the best approach; however, the alternative would be rotating the image used for the MapMarker itself, which might also have some performance hit.

Qt OpenGL data synchronization / Model/View implementation

I am trying to develop an application with Qt 5.5 and OpenGL. The basic task of the application will be to load simple objects, modify their positions in a scene, and save them together with other attributes (material, name, parent/child relations, etc.).
The only thing I have been struggling with for a week now is that I really don't know how I should approach the problem of synchronizing data. Let's say I have some kind of SceneGraph class which takes care of all SceneObjects. Those SceneGraphs should be rendered in a SceneView widget which can be used to modify its objects via transformations. Now how would I tell every SceneView that an object changed its position?
I thought of the Model/View architecture for a moment, but I am not really sure what this implementation should look like.
What would be the best way to handle objects like that in different windows/widgets but still have one single piece of data?
SceneObject:
Holds the mesh information (vertices, UVs, etc.)
Has a name (QString)
Has a material
Has a transform storing position, rotation and scaling information
(Important: these datatypes should be synchronized in all views)
SceneGraph:
Contains different SceneObjects and is passed to SceneViews
SceneView:
The QWidget responsible for drawing the Scene correctly in any QWindow.
Has its own camera to move around.
Handles UserInput and allows transformation of SceneObjects.
You could use signals and slots to observe position updates of SceneObjects and process them in each SceneView.

Knowledge Graph (Demo) UI using sigma.js?

Are there any beginner-friendly tutorials on displaying graphs the way the Knowledge Graph demo does?
I have the data in JSON format, exported from a graph DB.
The closest I have found so far is Gephi, which can also be integrated with Unity to produce a 3D graph like this one: https://www.youtube.com/watch?v=h_arRCf73Kg.
Then there is https://cayley.io/
https://n0where.net/opengraphiti-data-visualization-engine/
There is also https://www.maana.io/knowledge-platform/platform-capabilities/#maana-knowledge-graph . However, I have not tried to use or download their platform.
https://en.wikipedia.org/wiki/Force-directed_graph_drawing
Finally, I am working on learning Unity myself to build a simple GUI in which a user can easily identify nodes, edges and entities and move them around; so instead of just reading from a database, the user can also write to it through a UI.

Sketchup API for navigating around a model (eventually to integrate with Leap Motion)

I'm trying to use the SketchUp API to navigate around 3D models (zoom, pan, rotate, etc.). My ultimate aim is to integrate it with a Leap Motion app.
However, right now I think my first step is to figure out how to control the basic navigation gestures via the SketchUp API. After a bit of research, I see that there are the 'Camera' and 'Animation' interfaces, but I think they are more suited to hardcoded paths and motions within a script.
Therefore I was wondering: does anyone know how I can write a plugin that accepts input from another program (my eventual Leap Motion app in this case) and translates it into specific navigation commands using the SketchUp API (like pan, zoom, etc.)? Can this be done using the 'Camera' and 'Animation' interfaces (in some sort of step increments), or are there other interfaces I should be looking at?
As usual, any examples would be most helpful.
Thanks!
View, Camera and the Animation class are what you are looking for. Maybe you don't even need the Animation class; you might be fine using a timer from the UI class (UI.start_timer). It depends on the details of what you will be doing.
You can set the camera directly like so:
Sketchup.active_model.active_view.camera.set(ORIGIN, Z_AXIS, Y_AXIS)
or you can use View.camera=, which also accepts a transition-time argument if you find that useful.
For bridging input you could always create a Ruby C Extension that takes care of the communication between the applications. There are some quirks in getting C Extensions to work for SketchUp Ruby as opposed to regular Ruby, though, depending on how you compile it.
I wrote a hello world example a couple of years ago: https://bitbucket.org/thomthom/sketchup-ruby-c-extension
Though note that I have since found a better solution for Windows, using the Development Kit from RubyInstaller: http://rubyinstaller.org/
This answer relates to my comment above about the view seemingly 'jumping' when I assign a new camera to the current view using camera=, but not when I use camera.set.
I figured out this was happening because the FOV of the original camera was different, and the new camera was defaulting to an FOV of 30. Explicitly creating the camera with the optional perspective and FOV arguments taken from the initial camera solves this problem:
new_camera = Sketchup::Camera.new new_eye, new_target, curr_camera.up, curr_camera.perspective?, curr_camera.fov
Hope people find this useful!

How do I get the QDrag hotspot value in the dropEvent function?

I'm somewhat new to Qt, and I'm using Qt 4.8 to implement a graphical editor of sorts. Right now I've implemented dragging rectangles around my widget using drag and drop. In my mousePressEvent function I generate a QDrag with appropriate MIME data (similar to the puzzle example), and I just added a setHotSpot call.
The dragging works just fine, but in my dropEvent function I can't figure out a way to get back to the hot-spot setting of the original QDrag object; I don't appear to have access to it.
I've solved it for the moment by stuffing the hot-spot point into my MIME data (it's custom data anyway), but that seems wrong to me; surely there's some way within the Qt framework to get that hot-spot data in my dropEvent function.
Please check the following example from Qt:
http://doc.qt.io/qt-4.8/qt-draganddrop-fridgemagnets-example.html
This example shows how to use drag and drop events in Qt.
In that example we see that adding the hot-spot point to the MIME data does in fact appear to be the recommended way to pass the hot-spot point from where the drag is initiated to the dropEvent.
I don't understand what you are trying to achieve.
The "hotspot" point is just an offset relative to the pixmap representing the data being dragged, and is thus constant during the whole drag.
If you are looking for the initial drag point, you should indeed encode it into the MIME data.
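To make the encoding approach concrete, here is a sketch of both sides. The MIME type string "application/x-myeditor-item" and the helper function names are made up for illustration:

```cpp
// Drag side: serialize the hot-spot into the custom MIME payload so the
// drop side can recover it. Uses only standard Qt (4.8+) classes.
#include <QByteArray>
#include <QDataStream>
#include <QDrag>
#include <QMimeData>
#include <QMouseEvent>
#include <QPixmap>
#include <QPoint>
#include <QWidget>

void startDrag(QWidget *source, QMouseEvent *event, const QPixmap &pixmap,
               const QPoint &itemTopLeft)
{
    // Offset of the click within the dragged item's pixmap.
    QPoint hotSpot = event->pos() - itemTopLeft;

    QByteArray payload;
    QDataStream out(&payload, QIODevice::WriteOnly);
    out << hotSpot;  // QPoint streams directly through QDataStream

    QMimeData *mime = new QMimeData;
    mime->setData("application/x-myeditor-item", payload);

    QDrag *drag = new QDrag(source);
    drag->setMimeData(mime);
    drag->setPixmap(pixmap);
    drag->setHotSpot(hotSpot);
    drag->exec();
}

// Drop side: read the hot-spot back inside dropEvent().
QPoint readHotSpot(const QMimeData *mime)
{
    QByteArray payload = mime->data("application/x-myeditor-item");
    QDataStream in(&payload, QIODevice::ReadOnly);
    QPoint hotSpot;
    in >> hotSpot;
    return hotSpot;
}
```

In dropEvent you would then place the dropped item at event->pos() - readHotSpot(event->mimeData()) so it lands exactly where the user released it.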
