Get the camera angle of A-Frame

I have a project that I want to run on both PC and mobile devices, but different devices have different initial visible areas for the camera. How can I get this angle? For example, camera.fov defaults to 80, but on a mobile device it may effectively be only 50. How can I accurately obtain my current visual range?

It's hard to know without a link to code, but here's a simple way to get FOV from an A-Frame scene.
For example, on this scene:
https://aframe.glitch.me/
You can open the console and type:
> AFRAME.scenes[0].camera.fov
And you'll get an answer such as:
> 80
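If by "visual range" you mean how much of the scene fits in view at a given distance, you can derive it from that fov value together with the camera's aspect ratio (in A-Frame/three.js, fov is the vertical field of view in degrees). This is just the general perspective-frustum relation, nothing A-Frame specific:
visible height at distance d = 2 * d * tan(fov / 2)    (fov converted to radians)
visible width at distance d = visible height * aspect
where aspect can be read the same way, e.g. AFRAME.scenes[0].camera.aspect.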

Related

Clicking "Pictures" of a point cloud in PCL Library or Open3D

I have a point cloud and want to take "pictures" of it from various angles. Let us say I point my camera at the top at a certain angle, rotate the camera around the object at this particular orientation, and "snap" what the camera is seeing.
Next I want to change the camera orientation and repeat the process.
I am completely new to the 3D data processing domain and not very well versed in the PCL / Open3D libraries. How can I code this functionality?
Thanks in advance.
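A minimal sketch of that orbit-and-snapshot loop with PCL's PCLVisualizer (assuming the PCL visualization module is available; the file name, orbit radius, camera height and angle step are placeholders you would adapt):
#include <cmath>
#include <string>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>

int main()
{
    // Hypothetical input file; replace with your own cloud.
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    if (pcl::io::loadPCDFile<pcl::PointXYZ>("object.pcd", *cloud) < 0)
        return 1;

    pcl::visualization::PCLVisualizer viewer("snapshots");
    viewer.addPointCloud<pcl::PointXYZ>(cloud, "cloud");

    const double kPi = 3.14159265358979323846;
    const double radius = 2.0;   // distance of the virtual camera from the origin (assumption)
    const double height = 1.0;   // camera height, i.e. the fixed tilt of the orbit (assumption)
    for (int i = 0; i < 36; ++i) // one snapshot every 10 degrees
    {
        const double a = i * 10.0 * kPi / 180.0;
        // Place the camera on a circle around the cloud, looking at the origin, Z up.
        viewer.setCameraPosition(radius * std::cos(a), radius * std::sin(a), height,
                                 0.0, 0.0, 0.0,
                                 0.0, 0.0, 1.0);
        viewer.spinOnce(100);    // let the render window update
        viewer.saveScreenshot("snapshot_" + std::to_string(i) + ".png");
    }
    return 0;
}
Running the same loop with a different height (or a different view/up vector) covers the "change the camera orientation and repeat" part.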

Proximity Distortion in Depth Image

Description:
The goal of my current project is to determine the location of an "object" with just its 3D-coordinates.
To achieve that, I figured it'd be best to turn off the "Fill" mode of my camera (a ZED 2 from Stereolabs), because I want some hard edges in my depth image.
The Problem:
The depth image is being distorted to a major degree due to proximity of other "objects".
The following image shows the depth image from the side; it is viewing some bars in front of a smooth wooden wall. The wall is mostly plain, so everything is fine here.
I blacked out the color image and myself; do not worry about those parts.
When I put my hand or another object in front of the wooden wall, parts that are bigger than my actual hand get "pulled" towards the camera around the location of the hand or other object. These parts seem to "stick" to other elevated parts in the proximity, as the area between the bars and my arm gets pulled entirely.
Question(s):
Is this normal?
Is there an easy way to get rid of it?
What is the reason behind it?
My own assumption(s):
I feel like this is some sort of approximation of unknown parts. Hopefully so; I'm glad the camera was calibrated by default, as that is usually a pain to do right.
Because of the new object placed in front of the wall, more of the scene is hidden and therefore there are more areas that the camera cannot see with both lenses. Maybe it just "guesses" that the area in between is not so far off, due to some underlying algorithm that makes the image smoother.
First of all, I would advise you to change the depth mode while keeping the sensing mode in STANDARD:
ULTRA: offers the highest depth range and better preserves Z-accuracy along the sensing range.
QUALITY: has a strong filtering stage giving smooth surfaces.
PERFORMANCE: designed to be smooth, can miss some details.
From your description, it seems like you are using PERFORMANCE mode.
The ZED camera uses a matching algorithm to generate the disparity/depth map, which is closed source. I recently contacted Stereolabs about it and they said: "We cannot disclose this information to you because it's internal information and proprietary to Stereolabs."
Other work on the ZED camera has shown some limitations in depth sensing, especially when there is variation in lighting and shadows (see "Depth Data Error Modeling of the ZED 3D Vision Sensor from Stereolabs").
In addition to this, the depth error is directly proportional to the distance of the object from the camera, so make sure to set your depth range properly.
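As a rough illustration, switching the depth mode and tightening the depth range looks roughly like this with the ZED SDK (assuming a 3.x-style C++ API; enum names differ between SDK versions, and the min/max distances here are placeholders):
#include <sl/Camera.hpp>

int main()
{
    sl::Camera zed;

    sl::InitParameters init;
    init.depth_mode = sl::DEPTH_MODE::ULTRA;      // instead of PERFORMANCE
    init.coordinate_units = sl::UNIT::METER;
    init.depth_minimum_distance = 0.4f;           // placeholder: tune to your scene
    init.depth_maximum_distance = 5.0f;           // keep the range as tight as possible

    if (zed.open(init) != sl::ERROR_CODE::SUCCESS)
        return 1;

    sl::RuntimeParameters rt;
    rt.sensing_mode = sl::SENSING_MODE::STANDARD; // keep STANDARD, i.e. no "Fill"

    sl::Mat depth;
    if (zed.grab(rt) == sl::ERROR_CODE::SUCCESS)
        zed.retrieveMeasure(depth, sl::MEASURE::DEPTH);

    zed.close();
    return 0;
}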

Here SDK PositionSimulator - Issue, or surprising behaviour, if the signal is temporarily lost (in the file)

I'm using a GPX file which contains one point with the network source, while the other points come from the GPS satellite source. When playing the file and reaching this 'network point', the OnPositionChanged listener is not triggered for 20 seconds (it should consider that the signal is still the same); I lose the next points and my app considers the signal lost for 20 seconds.
This behaviour comes up when playing the file without navigation, and it also comes up when navigating, ...but it doesn't occur when navigation has been launched and then stopped. In that case, the positions of the points after the network point are received normally without the 20 s delay.
HERE Developer Support, could you please investigate?
Please try the new 3.14 version; it fixes some delays in PositioningManager. One suggestion: if possible, use only LocationMethod.GPS during navigation and LocationMethod.GPS_NETWORK in all other cases. This suggestion works for PositionSimulator too.

What device/instrument/technology should I use for detecting objects lying on a given surface?

First of: Thanks for taking the time to help me with my problem. It is much appreciated :)
I am building a natural user interface. I'd like the interface to detect several (up to 40) objects lying on it, and to detect when the objects are moved on its canvas. It is not important what the actual object on the surface is (e.g. "bottle") or what color it has – only the shape and the placement of the object are of interest (e.g. "circle").
So far I'm using a webcam connected to my computer and Processing's blob functionality to detect the objects on the surface of the interface (see picture 1). This has some major disadvantages for what I am trying to accomplish:
I do not want the user to see the camera or any alternative device, because it distracts the user's attention. Actually, the surface should be completely dark.
Whenever I reach in with my hand to rearrange the objects on the interface, the blob detection gets very busy and recognizes objects (my hand) which are not touching the canvas directly. This problem can hardly be tackled using a Kinect, because the depth functionality does not work through glass/acrylic glass – correct me if I am wrong.
It would be nice to install a few LEDs on the canvas controlled by an Arduino. Unfortunately, the light of the LEDs would disturb the blob detection.
Because of the camera’s focal length, the table needs to be unnecessarily high (60 cm / 23 inch).
Do you have any idea on an alternative device/technology to detect the objects? Would be nice if the device would work well with Processing and Arduino.
Thanks in advance! :)
Possibilities:
Use reflective tinted glass so that the surface appears dark or reflective.
Illuminate the area where you place the webcam with an array of IR LEDs.
I would suggest colour-based detection and contouring of the objects.
If you are using colour-based detection, convert the frames to the HSV or YCrCb colour space. These are much better for segmenting the required area in colour-based detection.
I also recommend checking out https://github.com/atduskgreg/opencv-processing. It interfaces OpenCV with Processing, so you get a lot of OpenCV functionality inside Processing.
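A minimal sketch of the colour-plus-contour route in OpenCV C++ (the opencv-processing library above exposes equivalent calls in Processing; the HSV bounds and the minimum blob area are placeholders to tune for your objects and lighting):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);                      // the webcam watching the surface
    if (!cap.isOpened())
        return 1;

    cv::Mat frame, hsv, mask;
    while (cap.read(frame))
    {
        // Segment in HSV rather than BGR; these bounds are placeholders.
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(0, 0, 200), cv::Scalar(180, 60, 255), mask);

        // Clean the mask a little before contouring.
        cv::morphologyEx(mask, mask, cv::MORPH_OPEN, cv::Mat());

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        for (const auto& c : contours)
        {
            if (cv::contourArea(c) < 500.0)       // ignore tiny blobs/noise (arbitrary threshold)
                continue;
            cv::Rect box = cv::boundingRect(c);   // shape + placement of one object
            cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
        }

        cv::imshow("objects", frame);
        if (cv::waitKey(1) == 27)                 // Esc to quit
            break;
    }
    return 0;
}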
One possibility:
Use a webcam with infrared capability (such as a security camera with built-in IR illumination). Apparently some normal webcams can be converted to IR use by removing a filter; I have no idea how common that is.
Make the tabletop out of some material that is IR-transparent, but opaque or nearly so to visible light. (Look at the lens on most any IR remote control for an example.)
This doesn't help much with #2, unfortunately. Perhaps you can be a bit pickier about the size/shape of the blobs you recognize as being your objects?
If you only need a few distinct points of illumination for #3, you could put laser diodes under the table, out of the path of the camera - that should make a visible spot on top, if the tabletop material isn't completely opaque. If you need arbitrary positioning of the lights - perhaps a projector on the ceiling, pointing down?
Look into OpenCV. It's an open source computer vision project.
In addition to existing ideas (which are great), I'd like to suggest trying TUIO Processing.
Once you have the camera setup (with the right field of view/lens/etc. based on your physical constraints) you could probably get away with sticking TUIO markers to the bottom of your objects.
The software will detect the markers; you'll differentiate the objects by ID, but also be able to get position/rotation/etc., and your hands will not be part of that.

How to properly save window state in QML

I read the Qt documentation, I checked out a couple of examples provided with the SDK, and I built Qt Creator from source to see how the Qt devs do it...still no luck.
I am developing a cross-platform application for Windows and Mac. On the Mac side I can try basically any of my solutions and all of them work perfectly (I guess that is thanks to macOS). On the other hand, on Windows I always find some kind of bug or sketchy behavior.
Before I go into more detail: the root of my problems is supporting multi-monitor environments where the monitors have different resolutions.
My two main solutions in a nutshell:
Since I am writing my application mainly in QML, I use ApplicationWindow for my main window and save its state in Settings. My code takes into consideration whether the previously saved position is still valid (for example, if the application was closed while it was on a monitor which is no longer available)...the only reason I have to do this is that Windows would otherwise open my app's window in "outer space" (Mac handles this automatically). My application (on Windows) gets into a really weird state if I close it on one monitor, then change the other monitor's scaling factor, and then reopen it. It opens up on the right monitor but it gets way over-scaled and the UI elements are just weirdly floating. If I resize the window everything gets back to normal.
I exposed my QML ApplicationWindow to C++, put it into a QWidget container, and attached that to a QMainWindow via setCentralWidget. With this method I have access to saveGeometry and restoreGeometry, which automatically take care of multi-monitor positioning, but the scaling anomaly I described in 1. still persists.
Did anybody solve this? Thanks in advance for any help and hint.
#retif asked me to provide a writeup when I commented that I had dealt with these types of issues months ago.
TLDR
When dealing with absolute positioning of Qt windows on the Windows OS - especially on Windows 10 - it's best to use system-DPI awareness. Some interpolation is required when going from Windows coordinate spaces (at different DPI awareness levels) to Qt coordinate spaces if you want the best scaling.
Here's what I did on my team's application.
The Problem:
It is really hard to do absolute positioning of a Qt Window when there are multiple monitors and multiple DPI resolutions to contend with.
Our application window "pops up" from a Windows task tray icon (or menu bar icon on Mac).
The original code would take the Windows screen coordinate position of the tray icon and use that as a reference to compute the positioning of the window.
At app startup, before Qt was initialized, we'd set the environment variable QT_SCALE_FACTOR to a floating-point value of (system DPI / 96.0). Sample code:
HDC hdc = GetDC(nullptr);                              // screen DC of the primary monitor
unsigned int dpi = ::GetDeviceCaps(hdc, LOGPIXELSX);   // horizontal DPI; 96 == 100% scaling
::ReleaseDC(nullptr, hdc);
std::stringstream dpiScaleFactor;
dpiScaleFactor << (dpi / 96.0);                        // e.g. 144 DPI -> "1.5"
qputenv("QT_SCALE_FACTOR", QByteArray::fromStdString(dpiScaleFactor.str()));
The code above takes the primary monitor's "DPI scale" and tells Qt to match it. It has the pleasant effect of letting Qt compute all scaling natively instead of the bitmap stretch Windows would do for a non-DPI-aware application.
Because we initialize Qt using the QT_SCALE_FACTOR environment variable (based on primary monitor DPI), we were using that value to scale the Windows coordinates when converting to Qt's QScreen coordinate space for this initial window placement.
Everything worked fine in single-monitor scenarios. It even worked fine in multi-monitor scenarios as long as the DPI on both monitors was the same. But on configurations of multiple monitors with different DPIs, things went wrong. If the window had to pop up on the non-primary monitor as a result of a screen change or a projector getting plugged in (or unplugged), weird stuff would happen. The Qt window would appear in the wrong position, or in some cases the content inside the window would scale incorrectly. And when it did work, there would be a "too big" or "too small" problem: a window scaled to one DPI positioned on a similarly sized monitor running at a different DPI.
My initial investigation revealed that Qt's coordinate space for the different QScreen geometries looked off. The coordinates of each QScreen rect were scaled based on QT_SCALE_FACTOR, but the adjacent edges of the individual QScreen rects were not aligned. e.g. one QScreen rect might be {0,0,2559,1439}, but the monitor to the right would be at {3840,0,4920,1080}. What happened to the region where 2560 <= x < 3840? Our code that scaled x and y based on QT_SCALE_FACTOR or DPI was relying on the primary monitor being at (0,0) and on all monitors having adjacent coordinate spaces. If our code scaled the assumed position coordinate to something on the other monitor, it might get positioned in an odd place.
It took a while to realize this wasn't a Qt bug per se. It's just that Qt is just normalizing the Windows coordinate space that has these weird coordinate space gaps to begin with.
The fix:
The better fix is to tell Qt to scale to the primary monitor's DPI setting and run the process in system-aware DPI mode instead of per-monitor-aware DPI mode. This has the benefit of letting Qt scale the window correctly and without blurriness or pixelation on the primary monitor and to let Windows scale it on monitor changes.
A bit of background. Read everything in this section of High DPI programming on MSDN. A good read.
Here's what we did.
Kept the initialization of the QT_SCALE_FACTOR as described above.
Then we switched the initialization of our process and Qt from per-monitor DPI awareness to system-aware DPI. The benefit of system DPI is that it lets Windows auto-scale the application windows to the expected size as the monitors change out from underneath it. (All Windows APIs act as if all monitors have the same DPI.) As discussed above, Windows does a bitmap stretch under the hood when the DPI is different from the primary monitor's, so there's a "blurriness issue" to contend with on monitor switching. But it's sure better than what it was doing before!
By default Qt will try to initialize the process to a per-monitor aware app. To force it to run in system-dpi awareness, call SetProcessDpiAwareness with a value of PROCESS_SYSTEM_DPI_AWARE very early in application startup before Qt initializes. Qt won't be able to change it after that.
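For reference, that call is a one-liner (shellscalingapi.h, linked against Shcore.lib, Windows 8.1+); a minimal sketch, placed at the very top of main() before the QGuiApplication is constructed:
#include <shellscalingapi.h>   // SetProcessDpiAwareness; link against Shcore.lib

// Force system-DPI awareness before Qt gets a chance to opt into per-monitor awareness.
SetProcessDpiAwareness(PROCESS_SYSTEM_DPI_AWARE);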
Just switching to System-aware dpi fixed a bunch of other issues.
Final bug fix:
Because we position our window at an absolute position (directly above the systray icon in the task tray), we rely on the Windows API Shell_NotifyIconGetRect to give us the coordinates of the systray icon. Once we know the offset of the systray, we calculate a top/left position on the screen for our window. Let's call this position X1,Y1.
However, the coordinates returned from Shell_NotifyIconGetRect on Windows 10 will always be the "per-monitor aware" native coordinates, not coordinates scaled to the system DPI. Use PhysicalToLogicalPointForPerMonitorDPI to convert X1,Y1 to a system-DPI coordinate we'll call X2,Y2. This API doesn't exist on Windows 7, but there it isn't needed either: if you are supporting Windows 7, resolve it with LoadLibrary and GetProcAddress, and just skip this step when the API isn't available.
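A rough sketch of those two steps (the NOTIFYICONIDENTIFIER fields and the tray-offset math are placeholders standing in for however your app registered its icon; error handling omitted):
// Physical (per-monitor-aware) rect of our tray icon.
NOTIFYICONIDENTIFIER nii = { sizeof(nii) };
nii.hWnd = trayMessageWindow;      // hypothetical: the window that owns the tray icon
nii.uID  = trayIconId;             // hypothetical: the id the icon was added with
RECT iconRect = {};
Shell_NotifyIconGetRect(&nii, &iconRect);

// X1,Y1: desired top/left of our window, derived from the icon rect (placeholder math).
POINT pt = { iconRect.left, iconRect.top };

// Convert to the system-DPI coordinate space. Resolve the API dynamically so the
// code still loads on Windows 7, where it doesn't exist and isn't needed.
typedef BOOL (WINAPI *PhysToLogFn)(HWND, LPPOINT);
HMODULE user32 = ::LoadLibraryW(L"user32.dll");
PhysToLogFn physToLog = reinterpret_cast<PhysToLogFn>(
    ::GetProcAddress(user32, "PhysicalToLogicalPointForPerMonitorDPI"));
if (physToLog)
    physToLog(nii.hWnd, &pt);      // pt is now X2,Y2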
Ideally, X2,Y2 would be passed to Qt methods like QQuickView::setPosition. But...
Because we were using the QT_SCALE_FACTOR environment variable to get the application to scale to the primary monitor's DPI, all the QScreen geometries would have normalized coordinates that differed from what Windows used as the screen coordinate system. So the final window position coordinate X2,Y2 computed above would not map to the expected position in Qt coordinates if the QT_SCALE_FACTOR environment variable was anything but 1.0.
The final fix, to calculate the final top/left position of the Qt window:
Call EnumDisplayMonitors and enumerate the monitor list. Find the monitor on which the X2,Y2 discussed above is positioned. Save off the MONITORINFOEX.szDevice as well as the MONITORINFOEX.rcMonitor geometry in a variable called rect.
Call QGuiApplication::screens() and enumerate these objects to find the QScreen instance whose name() property matches the MONITORINFOEX.szDevice in the previous step. And then save off the QRect returned by the geometry() method of this QScreen into a variable called qRect. Save the QScreen into a pointer variable called pScreen
Final step of converting X2,Y2 to XFinal,YFinal is this algorithm:
XFinal = (X2 - rect.left) * qRect.width  / rect.width  + qRect.left
YFinal = (Y2 - rect.top)  * qRect.height / rect.height + qRect.top
This is just basic interpolation between screen coordinate mappings.
Then the final window positioning is to set both the QScreen and XFinal,YFinal positions on the view object of Qt.
QPoint position(XFinal, YFinal);
pView->setScreen(pScreen);
pView->setPosition(position);
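Put together, the monitor-matching and interpolation steps look roughly like the sketch below (condensed: it uses MonitorFromPoint instead of a full EnumDisplayMonitors loop, which amounts to the same lookup, and assumes X2, Y2 and pView are the variables from the steps above):
// Which physical monitor is X2,Y2 on?
POINT p = { X2, Y2 };
HMONITOR hMon = ::MonitorFromPoint(p, MONITOR_DEFAULTTONEAREST);
MONITORINFOEXW mi = {};
mi.cbSize = sizeof(mi);
::GetMonitorInfoW(hMon, &mi);
RECT rect = mi.rcMonitor;                         // Windows geometry of that monitor

// Find the QScreen whose name matches the Windows device name.
QScreen* pScreen = nullptr;
QRect qRect;
const QString deviceName = QString::fromWCharArray(mi.szDevice);
for (QScreen* s : QGuiApplication::screens()) {
    if (s->name() == deviceName) {
        pScreen = s;
        qRect = s->geometry();                    // Qt geometry of the same monitor
        break;
    }
}

// Interpolate between the two coordinate spaces (RECT has no width/height members,
// so they are computed from the edges).
const int XFinal = (X2 - rect.left) * qRect.width()  / (rect.right  - rect.left) + qRect.left();
const int YFinal = (Y2 - rect.top)  * qRect.height() / (rect.bottom - rect.top)  + qRect.top();

pView->setScreen(pScreen);
pView->setPosition(QPoint(XFinal, YFinal));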
Other solutions considered:
There is the Qt mode called Qt::AA_EnableHighDpiScaling that can be set on the QGuiApplication object. It does much of the above for you, except it forces all scaling to be an integral scale factor (1x, 2x, 3x, etc... never 1.5 or 1.75). That didn't work for us because a 2x scaling of the window on a DPI setting of 150% looked too big.
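For completeness, enabling that attribute is a one-liner, and it has to happen before the QGuiApplication object is constructed:
// Qt 5.6+: let Qt do its own (integer-factor) high-DPI scaling.
QCoreApplication::setAttribute(Qt::AA_EnableHighDpiScaling);
QGuiApplication app(argc, argv);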
