Qt EvdevTouch plugin altering coordinates

I have an embedded Qt application that used to interface with my touch screen just fine using the EvdevTouch plugin, but for some reason it has now started to alter the coordinates received from the driver in a very strange fashion.
In order to now get something close to the actual Y coordinate of a touch, I have to calculate it as:
y = y + (y / 640 * 150)
(in other words, the reported value has to be scaled up by 790/640, roughly 1.23).
I should mention, though, that this is actually the X coordinate coming from the screen, because my scene is rotated 90 degrees.
I can confirm that the Linux driver is fine by looking at the evtest output, and when I run the application through xinput rather than evdevtouch, the coordinates are also correct. I don't want to use xinput, though, because multitouch doesn't work as nicely with it as it used to with EvdevTouch.
PS: My Qt build is the latest one on the jethro branch of meta-qt5 (Qt 5.5) for Yocto.

Related

Verify size and coordinates in Selenium

I have a question: does Python + Selenium make it possible to verify the size and coordinates of elements on a web page?
I have a mockup in Figma with each element's size and coordinates, and I want to know whether I can verify these values on the actual web page with Python + Selenium.
Maybe Figma can't be used directly, but perhaps the coordinates can be read in the browser and then written into the test cases?
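In principle this is possible: Selenium's WebElement exposes each element's rendered geometry through its location and size properties, which can be compared against the mockup values. A minimal sketch, assuming a hypothetical page and locator (the URL, element ID, and expected numbers below are placeholders, not from the question):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/page-under-test")  # placeholder URL

# The locator is hypothetical; use whatever identifies your element.
element = driver.find_element(By.ID, "header-logo")

print(element.location)  # {'x': ..., 'y': ...} top-left corner, in CSS pixels
print(element.size)      # {'width': ..., 'height': ...}

# Compare against the values taken from the Figma mockup:
assert element.size == {"width": 320, "height": 48}  # placeholder expected values

driver.quit()
Note that Selenium reports CSS pixels, so the numbers will only match the mockup if the page is rendered at the design's scale.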

Using Geo-coordinates Instead of Cartesian to Draw in Argon and A-Frame

I would like to create a GPS drawing program in Argon and A-Frame which draws lines based upon people's movements.
Lines can be drawn in A-Frame with, for example, the meshline component which uses Cartesian points:
<a-entity meshline="lineWidth: 20; path: -2 -1 0, 0 -2 0"></a-entity>
If I were to do this with a GPS device, I would take the GPS coordinates and map them directly to something like Google maps. Does Argon have any similar functionality such that I can use the GPS coordinates directly as the path like so:
<a-entity meshline="lineWidth: 20; path: 37.32299 -122.04185 0, 37.32298 -122.03224 0"></a-entity>
Since one can specify an LLA point for a reference frame, I suppose one way to do this would be to conceive of the center LLA point as "0, 0, 0" and then use a function to map the LLA domain to a Cartesian range.
It would be preferable, however, to use the geo-coordinates directly. Is this possible in Argon?
To understand the answer, you need to first understand the various frames of reference used by Argon.
First, Argon makes use of cesiumjs.org's geospatial math libraries and Entities, so that all "locations" in Argon must either be expressed geospatially OR be relative to a geospatial entity. These are rooted at the center of the earth, in what Cesium calls FIXED coordinates, also known as ECEF or ECF coordinates. In that system, coordinates are in meters, with up/down going through the poles and east/west going through the meridian (I believe). Any point on the surface of the earth is represented with pretty large numbers.
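To make "pretty large numbers" concrete, here is a small sketch in plain Python of the standard WGS84 geodetic-to-ECEF conversion (illustrative math only, not Argon or Cesium code; Cesium provides equivalent functions):
import math

# WGS84 ellipsoid constants
A  = 6378137.0            # semi-major axis, meters
F  = 1 / 298.257223563    # flattening
E2 = F * (2 - F)          # first eccentricity squared

def lla_to_ecef(lat_deg, lon_deg, alt_m):
    # Convert geodetic latitude/longitude/altitude to ECEF (FIXED) meters.
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + alt_m) * math.sin(lat)
    return (x, y, z)

# A point on the Georgia Tech campus; note the magnitude of the numbers.
print(lla_to_ecef(33.778463, -84.398881, 0.0))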
This coordinate system is nice because we can represent anything on or near the earth precisely using it. Cesium also supports INERTIAL coordinates, which are used to represent near-earth orbital objects, and can convert between the two frames.
But, it is inconvenient when doing AR for a few reasons:
First, the numbers used to represent the position of the viewer and objects near them are quite large, even if the objects are very close to the viewer, which can lead to mathematical accuracy issues, especially in the 3D graphics system.
Second, the coordinates we "think about" when we think about the world around us have the ground as "flat" and "up" as pointing ... well, up. So, in 3D graphics, an object above another object typically has the same X and Z values but a bigger Y. In ECEF coordinates, all the numbers change, because what we perceive as "up" is really a vector from the center of the earth through us, and is only "up" at the north (or south, depending on your +/-) pole. Most 3D graphics libraries you might want to use (physics libraries, for example) assume a world in which the ground is one plane (typically the XZ plane) and Y is up (some aeronautics and other engineering applications use Z as up and have XY as the ground, but the issue is the same).
Argon deals with this, as do many geospatial AR systems, by creating a local coordinate system for the graphics and application to use. There are really three options for this:
Pick some arbitrary (but fixed) local place as the origin. Some systems, which are built to work in one place, have this hard-coded. Others let the application set it. We don't do this because it would encourage applications to take the easy path and only work in one place (we've seen this in the past).
Set the local place to the camera. This has the advantage that the math is the most "accurate" because all points are expressed relative to the camera. But, this causes two issues. First, the camera tends to move continuously (even if only due to sensor noise) in AR apps. Second, many libraries (again, like physics libraries) assume that the origin of the system is stable and on the earth, with the camera/user moving through it. These issues can be worked around, but they are tedious for application developers to deal with.
Set the origin of the local coordinates to an arbitrary location near the user, and if the user moves far from it, recenter automatically. The advantage of this is the program doesn't necessarily have to do much to deal with it, and it meshes nicely with 3D graphics libraries. The disadvantage is the local coordinates are arbitrary, and might be different each time a program is run. However, the application developer may have to pay attention to when the origin is recentered.
Argon uses option 3. When the app starts, we create a new local coordinate frame at the user's location, on the plane tangent to the earth. If the user moves far from that location, we update the origin and emit an event to the application (currently, we recenter if you are 5 km away from the origin). In many simple apps, with only a few frames of reference expressed in geospatial coordinates (and the rest of the application data expressed relative to known geospatial locations), the conversion from geospatial to local can just be done each frame, allowing the app developer to ignore the recentering problem. The programmer is free to use either ENU (east-north-up) or EUS (east-up-south) as their coordinate system; we tend to use EUS because it's similar to what most 3D graphics systems use (Y is up, Z points south, and X is east).
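As a rough illustration of what such a tangent-plane local frame involves (a hedged sketch in plain Python/NumPy, not Argon's actual implementation), here is how an ECEF point can be expressed in a local ENU frame at an origin, with EUS as a simple axis swap:
import numpy as np

def ecef_to_enu_matrix(lat_deg, lon_deg):
    # Rows are the east, north, and up unit vectors at (lat, lon), expressed
    # in ECEF; the east/north plane is the plane tangent to the earth there.
    lat = np.radians(lat_deg)
    lon = np.radians(lon_deg)
    east  = np.array([-np.sin(lon),                np.cos(lon),               0.0])
    north = np.array([-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)])
    up    = np.array([ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon), np.sin(lat)])
    return np.vstack([east, north, up])

def ecef_to_local(p_ecef, origin_ecef, lat_deg, lon_deg):
    # Express an ECEF point in local ENU coordinates relative to the origin.
    return ecef_to_enu_matrix(lat_deg, lon_deg) @ (np.asarray(p_ecef) - np.asarray(origin_ecef))

def enu_to_eus(enu):
    # EUS is the same frame with Y up and Z pointing south.
    e, n, u = enu
    return np.array([e, u, -n])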
One of the reasons we chose this approach is that we've found in the past that if we had predictable local coordinates, application developers would store data using those coordinates, even though that's not a good idea (your data is then tied to some relatively arbitrary application-specific coordinate system, and will only work in that location).
So, now to your question. Your issue is that you want to use geospatial coordinates (Cesium's coordinates, which Argon uses) in AFrame. The short answer is that you can't use them directly, since AFrame is built assuming a local 3D graphics coordinate system. The argon-aframe package binds AFrame to Argon by allowing you to specify referenceframe components that position an a-entity at an Argon/Cesium geospatial location, and it takes care of all the internal conversions for you.
The assumption when I wrote that code was that authors would then create their content using the local 3D graphics coordinates, and attach those hunks of graphics to a-entity's that were located in the world with referenceframe's.
In order to have individual coordinates in AFrame correspond to geospatial places, you will need to manage that yourself, perhaps by creating a component to do it for you, or (if the data is known at the start) by converting it up front.
Here's what I'd do.
Assuming you have a list of geospatial coordinates (expressed as LLA), I'd convert each to local coordinates by first converting from LLA to Cesium's FIXED (ECEF) coordinates and creating a Cesium Entity, and then calling Argon's context.getEntityPose() on that entity, which will return its local coordinates. I would pick one geospatial location in the set (perhaps the first one?) and then subtract its local coordinates from each of the others, so that they are all expressed in local coordinates relative to that known geospatial location.
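In Python pseudocode, reusing the hypothetical lla_to_ecef and ecef_to_local helpers from the sketches above in place of the actual Cesium Entity / context.getEntityPose() calls just described, the convert-and-subtract step would look roughly like this:
def lla_path_to_local(points_lla):
    # points_lla: list of (lat, lon, alt) tuples.
    # Returns each point's offset, in local east-north-up coordinates,
    # relative to the first point in the set (the chosen reference).
    lat0, lon0, alt0 = points_lla[0]
    origin = lla_to_ecef(lat0, lon0, alt0)
    return [ecef_to_local(lla_to_ecef(lat, lon, alt), origin, lat0, lon0)
            for (lat, lon, alt) in points_lla]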
Then, I'd create an AFrame entity attached to the referenceframe of that chosen geospatial entity, and create your graphics content inside of it, using the local coordinates that are expressed relative to it. For example, let's say the geospatial location is LongLat = "-84.398881 33.778463" and you stored those points (local coordinates, relative to LongLat) in userPath; you could do something like this:
<ar-scene>
<ar-geopose id="GT" lla=" -84.398881 33.778463" userotation="false">
<a-entity meshline="lineWidth: 20; path: userPath; color: #E20049"></a-entity>
</ar-geopose>
</ar-scene>

Project Tango strange rotation visualisation

I am working on 3D reconstruction with Tango. Our system is quite similar to KinectFusion, which uses a voxel representation, but uses Tango as the tracker. The left image (in the video linked below) is rendered by raycasting at the current pose (given by Tango) in real time. The raw pose is converted by GetOC2OWMat() as in the code examples; in addition, the signs of tx and rx are flipped to fit our system. Everything works fine except rotation about the Z axis, which changes the angle in the rendered image. I guess the coordinate system conversion is not done properly, though depth integration works if no Z rotation is involved. I have also checked that det(R) is always 1.
Video
It sounds like you are not factoring in the intrinsics: have you accounted for the camera and device IMU frames? You need these to fully re-establish the original viewpoint, i.e., both the camera and device IMU frame matrices need to be multiplied into your stack.
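For illustration, "multiplying into your stack" amounts to chaining the 4x4 frame transforms; a hedged sketch with made-up NumPy placeholders (the real matrices come from Tango's pose and extrinsics queries, not from this code):
import numpy as np

T_world_device = np.eye(4)  # per-frame pose reported by the tracker (placeholder)
T_device_imu   = np.eye(4)  # fixed device-to-IMU extrinsic (placeholder)
T_imu_camera   = np.eye(4)  # fixed IMU-to-color-camera extrinsic (placeholder)

# The full camera-in-world pose chains all three transforms together:
T_world_camera = T_world_device @ T_device_imu @ T_imu_camera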
Sorry, I just found the place where things go wrong: when the image is displayed with OpenGL, the rendered GL surface does not have the same aspect ratio as the raycast image.
Do you program with Java/C/Unity? I'm curious because my device has problems with the camera data and you seem to capture it without problems. I am quite sure it's a bug but I would like to make sure it really is one.

How many lines can Qt draw on-screen?

I am currently working on a Qt application to draw maps. I am trying to draw 400,000+ lines and it crashes after using ~2GB but I still have memory left on my machine. I am wondering if I am hitting some limit inside of Qt that is causing the problem. Anyone know if there is a limit to the number of things you can draw or if you can change this limit?
If it is helpful, I am coding in C++ with a class that has a member function to draw the lines. The code is roughly as follows:
QPointF fromPoint = foo( x );
QPointF toPoint = foo( y );
m_Painter.drawLine( fromPoint, toPoint );  // m_Painter is a QPainter
Edit: Turns out the problem was somewhere else in the code; it had to do with the custom caching that was being done. I am still interested, though: does anyone know if there is a limit to how many lines Qt can draw?
QPainter executes its underlying graphics through QPaintEngine, which has several implementations (like qpaintengine_mac.cpp, qpaintengine_x11.cpp, or qpaintengine_preview.cpp).
Some devices are raster-based, and the engine is likely drawing each line into an image buffer and throwing away the endpoints after that drawing is done. There should be no limit to the number of lines you can draw in that case.
If the target device is OpenGL, or a printer that is doing some kind of PostScript-like output, then the limitations of that particular paint engine may well be a factor. You'd have to look at the specific one.
For example: if you trace down the X11 implementation of drawLine, you'll see it passes through drawPolygon(), down through strokePolygon_dev(), and bottoms out at a call to XDrawLines:
XDrawLines(dpy, hd, gc, pts, numberPoints, CoordModeOrigin);
So there you have another abstraction layer, and the question becomes whether the X Windows display parameter is guaranteed to be raster. (My guess would be that it is.)
Anyway, the answer is: unlimited if raster; it may depend otherwise, but the limitations (if any) are probably coming from the underlying device for the paint engine, not from Qt.

Qt: QPainter.drawText() into QPixmap crashes under OSX Lion, not Leopard

Solved!
I cross-compiled for Windows and got my hint: the Windows version crashed even before main(), so it had to be a basic allocation issue. And it was. I had made a large static allocation and recently made it even larger (something the program requires; it is not optional or temporary). I changed array[size] to array = calloc(etc, etc) and bingo: the Windows version ran, and the crash deep in the bowels of OSX/Lion went away. Everything runs fine again.
So, lesson learned: large static allocations are no good; neither Windows nor OSX is particularly able to accommodate them.
I get a paint event. I have a standalone QPixmap that I will be drawing onto a QWidget frame. Within the paint event call, I create a painter for the QPixmap, which lives in the class definition. I set colors, brushes, and pens. I fill, I draw lines, rects, gradients, text, ellipses; it all works fine under Snow Leopard and Leopard. Under Lion (10.7.anything), any drawText() call on this same QPixmap fails many call levels deep within OSX, five levels deep in com.apple.ColorSync. It doesn't matter what font I use, or what size. Both drawText() and drawStaticText() fail the same way.
The failure occurs prior to any attempt to actually draw the QPixmap; it's during the rendering of the drawText() that it blows up. All I've done to that point is fill with black (works), fill with a gradient (works), draw some filled rects (works), and draw a grid (works), and then I go to draw this text. Which doesn't work, and instead blows out the main thread (0) (which is doing the drawing during the paint event) with EXC_BAD_ACCESS / SIGSEGV.
Qt has no color management as far as I can tell, and OSX has no way to turn off ColorSync for the display.
For the moment, I've special-cased the OS version and simply don't draw text (in the beta) when running under Lion, but this is a horrible workaround.
Does anyone have any idea why Apple's 10.7 ColorSync would get its knickers in a knot over drawText() to a perfectly vanilla QPixmap with a valid size, text, rectangle, and a within-bounds drawing task?
