Why javafx.geometry.Point2D doesn't have a setLocation(x, y) method - javafx

In JavaFX the javafx.geometry.Point2D class is missing a core method, setLocation(x, y).
I wonder why this object was made immutable?
For performance one would like to minimize the number of new instances created, so the ability to reuse a Point2D would be nice.

IMHO the whole design of JavaFX 1.x is broken for not introducing this. As an alternative I recommend the much better designed Scene Shapes API of JavaFX 2.0, which gives full access to all properties. A point shape is omitted there, but a Rectangle can easily be used as a replacement, for instance.

OpenGL context conflict between Qt and a third party game engine - how to resolve?

I am trying to use an OpenGL context external to Qt, using a window handle that comes from Qt. The setup is:
A QMainWindow - contains various widgets, including a QWebEngineView to make some web content available (in my case it's Leaflet, for rendering and interacting with OpenStreetMap tiles)
Panda3D engine - rendered on top of my Qt application using the window handle of the central QWidget.
The setup works... when Panda3D is set to DirectX9 (aka pandadx9). When I switch the pipeline to OpenGL (aka pandagl) I get a black screen (Panda3D's window) and very glitchy OpenGL content (Qt). The reason is simple, yet beyond my ability to fix: QWebEngineView uses OpenGL, and somehow there is a conflict at the OpenGL context level between the engine and Qt.

I am looking for a way to resolve this without removing the direct interaction with Panda3D's window (in my case using ShowBase), since the engine already offers a lot in terms of features for handling mouse events that I would otherwise be forced to reimplement in Qt and pass down to the engine. I am also not sure whether I can make Panda3D render its scene to an FBO, or how I would load that into, say, a QOpenGLWidget. Activating a shared OpenGL context before initializing QApplication allows multiple OpenGL widgets to render OpenGL content.
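For reference, that shared-context hint looks like this - a minimal sketch, assuming PyQt5:

    import sys

    from PyQt5.QtCore import Qt
    from PyQt5.QtWidgets import QApplication

    # Must be set before the QApplication object is constructed,
    # otherwise it has no effect.
    QApplication.setAttribute(Qt.AA_ShareOpenGLContexts)
    app = QApplication(sys.argv)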
So far I have experimented with integrating Panda3D into Qt in two ways:
run two event loops in parallel - start the Panda3D engine in a child process and communicate with it through a pipe
run a single event loop - use the Panda3D engine's event loop to also drive Qt's main event loop, by adding a task to the engine's task manager that runs QApplication::processEvents() on every cycle
In both cases I am handing over a window ID (QWidget::winId()) as the parent window of Panda3D.
CASE 1 - parallel processes, separated event loops
This solution comes with a lot of overhead. All the communication between the Qt content (running in the parent process) and the engine (running in the child process) has to be sent through a pipe, hence involving IPC. This adds a lot of code complexity, and in the case of my logging (using Python's logging module with a custom logging handler that writes records to an SQLite3 database) it introduces a whole lot of issues: concurrent write access to a file between processes is tricky in general, and I'm definitely not an expert. This case, however, does not exhibit the OpenGL issue described above!
CASE 2 - single process, single event loop
In my opinion this is the more elegant solution and what I would like to go with (if possible). An example can be found here. I use the engine's main loop to process Qt's main loop, because a 3D game engine usually has to deal with far more events in a shorter period of time (rendering, audio, video, filesystem access, physics and so on) than a standard Qt GUI. This is also the recommended way as described in the engine's official documentation. The other way around (Qt's main loop driving Panda3D's) is also possible. IMHO neither has anything to do with my issue: the moment I add anything Qt-ish that uses OpenGL, the problem described above occurs. On Windows this is not a huge deal breaker, since I can use DirectX for the engine while Qt does its OpenGL thing. On Linux that is not possible (without something like Wine), and in addition I want to use OpenGL exclusively, including GLSL.
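To make case 2 concrete, here is a condensed sketch of the wiring, assuming PyQt5 and Panda3D's ShowBase (the names and sizes are mine):

    import sys

    from direct.showbase.ShowBase import ShowBase
    from panda3d.core import WindowProperties
    from PyQt5.QtWidgets import QApplication, QMainWindow, QWidget

    app = QApplication(sys.argv)
    window = QMainWindow()
    host = QWidget()                    # central widget that will host the engine
    window.setCentralWidget(host)
    window.resize(800, 600)
    window.show()

    base = ShowBase(windowType='none')  # defer the engine's window creation
    props = WindowProperties()
    props.setParentWindow(int(host.winId()))  # hand over the Qt window handle
    props.setOrigin(0, 0)
    props.setSize(host.width(), host.height())
    base.openDefaultWindow(props=props)

    def pump_qt(task):
        # Process Qt's pending events once per engine frame.
        app.processEvents()
        return task.cont

    base.taskMgr.add(pump_qt, 'pump_qt')
    base.run()                          # Panda3D's loop now drives both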
(Screenshots omitted: case 1 and case 2 with mixed DX9 and OpenGL, and two captures of case 2 with OpenGL only.)
While Panda3D offers CPU rendering (aka p3tinydisplay), QWebEngineView does not. Falling back to CPU rendering on the engine side is not an option, considering the huge number of polygons I have to render, not to mention that I can do something more useful with the CPU (e.g. processing the physics).
Last but not least, I have seen a third integration attempt, which I quickly discarded: rendering the scene as an image to RAM, reading it in Qt, generating a QPixmap from it and painting that on top of a QLabel. Needless to say this is a no-go for my scenario, due to the heavy performance hit among other things.
Any ideas how to tackle this?
I don't think event loops have anything to do with it; the problem is that by default child windows get the same device context (DC) as the parent. In your case that's a problem because two different components (the Qt framework and the Panda3D engine) each try to ChoosePixelFormat and initialize an OpenGL context on the same DC, which is not supported.
The proper solution is to create the QWidget hosting the Panda3D engine as a real native window with the Qt::MSWindowsOwnDC style, which corresponds to the CS_OWNDC window-class style. Normally a QWidget doesn't create any native window at all -- it is instead implemented entirely within the Qt framework, by drawing itself on the parent window.
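A minimal sketch of what that could look like from PyQt5; note that the extra widget attributes (WA_NativeWindow, WA_PaintOnScreen) are my assumption about what is needed to keep Qt from painting over the engine:

    import sys

    from PyQt5.QtCore import Qt
    from PyQt5.QtWidgets import QApplication, QMainWindow, QWidget

    app = QApplication(sys.argv)
    window = QMainWindow()

    # Host widget for the engine: a real native window with its own DC, so
    # Panda3D's ChoosePixelFormat doesn't collide with Qt's on a shared DC.
    host = QWidget(window, Qt.MSWindowsOwnDC)  # maps to CS_OWNDC on Windows
    host.setAttribute(Qt.WA_NativeWindow)      # force a real HWND for winId()
    host.setAttribute(Qt.WA_PaintOnScreen)     # Qt won't double-buffer over it
    window.setCentralWidget(host)
    window.show()

    handle = int(host.winId())  # pass this to Panda3D as the parent window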

FlashBuilder loosely coupled and reusable component architecture

I want my MXML or ActionScript components to be reusable and loosely coupled, and I am wondering if it is good practice to use FlexGlobals.topLevelApplication to dispatch and listen for events. For instance, I want my login component to dispatch events to the topLevelApplication, so that when I reuse the component in a different project I won't have to change anything, since all applications have a topLevelApplication.
My other option is to have a separate static class handle event dispatching, but then I am creating a dependency on that static class.
Any suggestions would be appreciated.
Thanks.
I would recommend that you read about event propagation and have your login component dispatch the event to "whoever" catches it as it bubbles up through the hierarchy.
http://livedocs.adobe.com/flex/3/html/help.html?content=events_08.html
I have to agree with Stian's answer here for the most part. With regard to weltraumpirat's comment, I feel dependency injection can be great, but it also adds a lot of complication with regard to debugging/testing IMO, and if you're not actually going to have different implementations of an interface it just adds a lot of garbage code to look through without any real benefit. I feel like Spring works out well on the service-layer side because you can swap out data access layer (DAO) implementations if you switch DBs or something of that nature, but it's hard for me to see the benefit on the front end.
I would not recommend using the topLevelApplication, as you'll end up with something like Cairngorm, where you have a humongous set of events/event handlers at the top level. Not to mention that if you follow their suggested model, you end up with a bunch of pointless event classes that simply define a string (there are better and worse ways to go about it using Cairngorm, but I'm not a fan of what I've seen in the wild).
A developer at my company wrote a custom MVC "micro-framework" that works great for us: we can attach a controller to any display object to handle events for it. This works wonderfully, but it does require the initial overhead of developing and testing it. It's built on top of the existing event scheme in Flex, so our MVCEvent class extends Event. Ours bubble by default, since we tend to want that for the kinds of events we create, where the controller could be at any level above the UIComponent dispatching the event; you can always opt to turn bubbling off. Starting from the Event base class also means we can use the built-in EventDispatcher dispatchEvent() method. He wrote just about everything against interfaces that define the methods for each part (such as IMVCEvent and IMVCCommand), only assuming objects implement a given interface to be used in a particular context. That way, if the built-in framework implementation doesn't work for your particular scenario, you just create a new class that implements the same interface (if extension also doesn't work for your case). This gives a huge amount of flexibility, yet at the same time we're generally able to just reuse existing implementations of events, commands and controllers. In each app we only define new views and commands for things specific to the business rules of that application.
So what does all that boil down to? I suggest you roll your own as a library, then reuse that library across your many projects. You will know your own library inside and out and can tweak it quickly as you see fit, without having to understand the many use cases someone else designed their MVC framework to handle.
I realize this isn't an ideal solution in terms of getting something done now, but I think it really is the best solution for the long haul (it's been great for us, that's really all I can say).
An amendment here to acknowledge the existing Flex MVC frameworks available and appease the crowd:
Robotlegs
By the way, see what the creator of Robotlegs has to say about using his code (his words, not mine):
Swiz
Mate
Stack Overflow question about Flex frameworks

JOGL picking example

Hi guys,
I am having trouble adding picking of objects to a JOGL project.
I know this could be done with the pick buffer, but I can't find any examples.
Anyone?
In general, as you are probably aware, JOGL code translates directly from any other OpenGL examples you might see on the web.
GL_SELECT-based picking seems to be very much out of favour these days; it is deprecated in the spec and poorly implemented by drivers.
Alternatives you can use are:
Rendering each object in a unique color (with all lighting, fog etc. disabled) so you can determine which object the mouse is over via glReadPixels, then clearing the buffers and rendering your normal graphics. This approach is explained by the top-rated answer to "OpenGL GL_SELECT or manual collision detection?", for example; a sketch of it follows below.
Ray-casting into your geometry (see the selection FAQ link below). This also means that you don't need an active GL context in the thread you call the code from, FWIW.
I've used both of these methods in the same application, and currently have good results with the latter; but since most of the objects in that application are spheres, it is a lot cheaper there than it might be with arbitrary models.
http://www.opengl.org/resources/faq/technical/selection.htm
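The question asks about JOGL, but since (as noted above) OpenGL examples translate directly, here is a rough sketch of the unique-color approach in Python with PyOpenGL; obj.draw() and the surrounding windowing plumbing are assumed:

    from OpenGL.GL import (
        GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT, GL_DITHER, GL_LIGHTING,
        GL_RGB, GL_UNSIGNED_BYTE, glClear, glClearColor, glColor3ub,
        glDisable, glEnable, glReadPixels,
    )

    def pick(objects, mouse_x, mouse_y, window_height):
        """Return the object under the mouse, or None for the background."""
        # Flat colors only: disable anything that could alter the color.
        glDisable(GL_LIGHTING)
        glDisable(GL_DITHER)
        glClearColor(0.0, 0.0, 0.0, 1.0)
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

        for index, obj in enumerate(objects, start=1):  # 0 is the background
            # Encode the object's index in its RGB color (~16M distinct ids).
            glColor3ub(index & 0xFF, (index >> 8) & 0xFF, (index >> 16) & 0xFF)
            obj.draw()  # assumed: each pickable object can draw itself

        # Window coordinates are top-down, OpenGL's are bottom-up.
        data = glReadPixels(mouse_x, window_height - mouse_y, 1, 1,
                            GL_RGB, GL_UNSIGNED_BYTE)
        r, g, b = bytearray(data)[:3]
        index = r | (g << 8) | (b << 16)

        # Restore state; clear and render the normal scene afterwards.
        glEnable(GL_DITHER)
        glEnable(GL_LIGHTING)
        return objects[index - 1] if index else None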

How to use Qt Model/View framework with the Graphics View framework

I am working on a mapping application and need to display the data objects using a table, a form and as graphical objects in the map. I'm using PyQt, but that's not really important as this is a Qt question not a Python question.
If I only needed the table and form views this would be easy: I'd just use the Qt Model/View framework. However, I need the map view to provide functionality only really available in the Graphics View framework, which is essentially its own model/view framework, with the QGraphicsScene acting as the data model.
I can think of two ways to do this. One would be to start with an authoritative model subclassed from QAbstractItemModel, link it to a subclass of QAbstractItemView, and from there generate and update QGraphicsItems in the scene. This looks ugly, though, because I'm not sure how to handle user interaction with, and changes to, the data items through interaction with the QGraphicsItems.
The other way I can think of is to treat the QGraphicsScene as the authoritative data source, storing the data object in each QGraphicsItem's .data() property. I'd then subclass QAbstractItemModel and write it so that it uses the scene as its data store, and the other views would use this as their model. How would I propagate changes to the data in the scene up to the model, though?
Whichever approach I take, it looks like there's a gap not handled by the frameworks. In Model/View all changes are assumed to be made in the model. In Graphics View all changes are assumed to be made in the scene.
So which approach would you choose: QAbstractItemModel (authoritative) -> QAbstractItemView -> QGraphicsScene, or alternatively QGraphicsScene (authoritative) -> QAbstractItemModel -> other views? Why would you choose one over the other, and what gotchas do you anticipate? Has anyone else needed to bridge this gap between Qt's twin model/view frameworks, and how did you do it?
QAbstractItemModel(authoritative)->QAbstractItemView->QGraphicsScene
Without a doubt. I have done this before, it does require a bit of duplication (at least some that I couldn't avoid) but nothing too bad.
This also allows you to represent your data in standard views along with the scene which is quite nice.
My best advice would be to store a QHash from QPersistentModelIndex to QGraphicsItem, along with the QGraphicsScene, in the QAbstractItemView subclass you create. This allows you to quickly go back and forth between Model/View land (QModelIndex) and Graphics View land (QGraphicsItem).
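A bare-bones sketch of that bookkeeping, assuming PyQt5 - for brevity, a standalone helper watching the model's signals stands in for the full QAbstractItemView subclass, and the model is assumed to store an (x, y) tuple under Qt.UserRole:

    import sys

    from PyQt5.QtCore import QObject, QPersistentModelIndex, Qt
    from PyQt5.QtGui import QStandardItem, QStandardItemModel
    from PyQt5.QtWidgets import QApplication, QGraphicsScene, QGraphicsView

    class SceneSync(QObject):
        """Keeps one QGraphicsItem per model row, keyed by persistent index."""

        def __init__(self, model, scene):
            super().__init__()
            self.model, self.scene = model, scene
            # QPersistentModelIndex stays valid across row insertions and
            # removals, so it is a safe key for hopping between the two lands.
            self.items = {}
            model.rowsInserted.connect(self.rows_inserted)
            model.dataChanged.connect(self.data_changed)

        def rows_inserted(self, parent, first, last):
            for row in range(first, last + 1):
                index = self.model.index(row, 0, parent)
                x, y = index.data(Qt.UserRole)      # assumed (x, y) payload
                item = self.scene.addEllipse(-5, -5, 10, 10)
                item.setPos(x, y)
                self.items[QPersistentModelIndex(index)] = item

        def data_changed(self, top_left, bottom_right, roles=()):
            for row in range(top_left.row(), bottom_right.row() + 1):
                index = self.model.index(row, 0, top_left.parent())
                item = self.items.get(QPersistentModelIndex(index))
                if item is not None:
                    x, y = index.data(Qt.UserRole)  # move the marker
                    item.setPos(x, y)

    app = QApplication(sys.argv)
    model = QStandardItemModel()
    scene = QGraphicsScene()
    sync = SceneSync(model, scene)

    marker = QStandardItem('marker')
    marker.setData((30, 40), Qt.UserRole)  # hypothetical coordinate payload
    model.appendRow(marker)                # rowsInserted fires, item appears

    view = QGraphicsView(scene)
    view.show()
    sys.exit(app.exec_())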

Qt plotting application

Currently I'm trying to develop a simple plot prototype and I'm struggling with some kind of white/empty-sheet syndrome.
I'm back to Qt after 2 years, so I feel quite rusty.
My application should:
plot and manage custom layers of data
plot on custom canvas background
manage markers on plot
My plan is to use following design:
QGraphicsScene/View/Item as sprite-like management widgets for the background, markers, pointers and other "bitmap" objects, etc.
QPainter/QPixmap or QPicture for the actual data layers - and, if possible, set them as QGraphicsItems to simplify management of dynamic graphics
I don't want to use Qwt or a similar library, unless I can plot with it on a custom background (I don't like the look of Qwt's graphic style).
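To make the plan concrete, here is a rough sketch of the composition I have in mind, assuming PyQt5 (the data is made up): the background as one pixmap item, each data layer painted into its own transparent QPixmap and stacked on top as a QGraphicsPixmapItem, and markers as plain items above the layers.

    import sys

    from PyQt5.QtCore import QPointF, Qt
    from PyQt5.QtGui import QPainter, QPen, QPixmap, QPolygonF
    from PyQt5.QtWidgets import QApplication, QGraphicsScene, QGraphicsView

    app = QApplication(sys.argv)
    scene = QGraphicsScene(0, 0, 400, 300)

    # Background layer (would normally be the custom canvas image).
    background = QPixmap(400, 300)
    background.fill(Qt.white)
    scene.addPixmap(background).setZValue(0)

    # One data layer: paint the data into its own transparent pixmap...
    layer = QPixmap(400, 300)
    layer.fill(Qt.transparent)
    painter = QPainter(layer)
    painter.setPen(QPen(Qt.blue, 2))
    zigzag = QPolygonF([QPointF(x, 150 + 50 * ((x // 40) % 2))
                        for x in range(0, 401, 40)])
    painter.drawPolyline(zigzag)
    painter.end()

    # ...then manage it as an ordinary QGraphicsItem (show/hide, z-order).
    layer_item = scene.addPixmap(layer)
    layer_item.setZValue(1)

    # Markers sit above the data layers as plain QGraphicsItems.
    marker = scene.addEllipse(155, 95, 10, 10, QPen(Qt.red))
    marker.setZValue(2)

    view = QGraphicsView(scene)
    view.show()
    sys.exit(app.exec_())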
Is my plan proper in terms of Qt class usage and composition?
I'd like to have at least a clear overview of the classes that should be involved in this kind of prototype.
Thanks in advance.
P.
I think you have the basic idea with QGraphicsView. Here are a few resources which might help:
Graphics View
Diagram Scene
If you want to use the new animation and state machine classes:
Stickman
Also, take a look at Gunnar's Qt Labs blog. He recently did a series on graphics performance.
All of these are strictly Qt (the animation and state machine classes are in 4.6). They are in C++, but hopefully you can translate what you need to Python.
You don't say much about your project, so it's hard to propose a more specific answer, but have a look at the Qt demos involving Graphics View, especially Diagram Scene and 40000 Chips. I think you will find them inspiring for what you want to do.
Maybe MathGL is appropriate for you. It has a Qt widget, or you can use its RGBA image output directly to combine it with any background in your widget.
I recommend QCustomPlot, a Qt C++ library. It focuses on making good-looking, publication-quality 2D plots, graphs and charts, and also has high performance for realtime visualization applications. You can get it here: http://www.qcustomplot.com/
You may want to take a look at the Core Plot framework. Core Plot is OS X specific, but it is built on the OS X Core Animation system, which has a lot of conceptual similarity to the Qt Graphics View framework. You'll have to learn to visually parse Objective-C (a less-than-two-day process for any competent C++ developer), but you should be able to see the general architecture relatively easily. The Core Plot wiki has some nice high-level documentation that might set you on your way without even needing to look at the code.
