For a GPS app, I would like to set the interval for position updates, or at least know the frequency that is used. How can I do this with the HERE SDK?
I ask because I would like to perform some operations in the OnPositionChangedListener callback, and those operations will depend on the interval. Maybe I'll need to use a timer in order to have more control.
In comparison, UWP apps manage this with the Geolocator.ReportInterval and Geolocator.DesiredAccuracy properties.
The wait() method can be used to specify the time interval at which coordinates should be captured.
public abstract static class NavigationManager.GpsSignalListener
extends java.lang.Object
The accuracy of the GPS coordinates depends on many factors, with the result that the computed latitude and longitude can be from 0.5 meters to up to 40 meters away from the actual position. It is often impossible to determine a GPS position inside tunnels, inside buildings or in urban canyons. GPS receivers cannot provide a maximum deviation and thus, for example, are unable to indicate that the calculated location lies within 3.5 meters of the actual location with 95% probability. GPS receivers only provide an HDOP/VDOP/PDOP... value (horizontal/vertical/positional dilution of precision), which indicates that the computed coordinates of a location cannot be more accurate than this value considering the number and/or position of satellites and the mathematical algorithms. This minimum error cannot be used to estimate the maximum error.
GPS heading and speed are computed by the receiver device based on the last several sets of GPS coordinates. The accuracy of the calculation depends on the actual speed of the vehicle and becomes unreliable if the vehicle speed drops below ~10 km/h. At low speeds, even the positional accuracy declines considerably, resulting in large random point clouds in some situations.
For more details, please refer to:
https://developer.here.com/documentation/android-premium/api_reference_java/index.html
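If the SDK does not expose an interval setting, one pragmatic workaround is to throttle the work you do inside the listener yourself. Here is a minimal, SDK-agnostic sketch (plain Python, with hypothetical callback and handler names) of gating updates by a minimum interval:

import time

# Hypothetical throttle: process a position update only if at least
# MIN_INTERVAL_S seconds have passed since the last processed update.
MIN_INTERVAL_S = 5.0
_last_processed = 0.0

def on_position_changed(position):
    """Called on every position update (hypothetical callback name)."""
    global _last_processed
    now = time.monotonic()
    if now - _last_processed < MIN_INTERVAL_S:
        return  # too soon; skip this update
    _last_processed = now
    handle_position(position)

def handle_position(position):
    # Your interval-dependent processing goes here.
    print("processing position:", position)

The same gating logic can be written inside the SDK's listener callback in Java; the point is simply to make your processing independent of whatever update rate the SDK happens to use.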
Maybe somebody has some links about determining distance when the scanners are more than 15 meters apart?
All the articles I found describe how to determine the distance, but for example in a 5 x 5 meter room with 4 scanners, or with the scanners even closer together. I want to know what accuracy I can get when the scanners are further apart.
Unfortunately beacon distance estimates are wildly inaccurate at distances over a few meters. It is nearly impossible to discern between beacons 15 meters and 25 meters away.
Why? Because radio signals get weaker with distance, and at distances near the maximum range of Bluetooth beacons (typically 20-40 meters in home/school/office indoor environments with walls and obstructions) the signal level nears the noise floor. There is very little absolute difference between the signal level of a beacon at 25 meters and one at 15 meters. By looking for such tiny differences in signal you are mostly seeing error and non-distance influences on the signal measurement.
The best you can do with a distance calculation is determine that the beacon is likely far away (e.g. > 10 meters). Once you know it is far away, determining how far is beyond the limits of the technology.
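To see why, consider a simple log-distance path-loss model (the parameters below are typical assumptions, not measured values):

import math

# Log-distance path-loss model: rssi(d) = rssi_at_1m - 10 * n * log10(d)
# Assumed parameters: rssi_at_1m = -59 dBm, path-loss exponent n = 2.0.
RSSI_AT_1M = -59.0
PATH_LOSS_EXPONENT = 2.0

def expected_rssi(distance_m):
    """Expected RSSI (dBm) at a given distance under the model."""
    return RSSI_AT_1M - 10.0 * PATH_LOSS_EXPONENT * math.log10(distance_m)

for d in (1, 5, 15, 25):
    print(f"{d:>2} m -> {expected_rssi(d):6.1f} dBm")

# Under these assumptions the 15 m and 25 m readings differ by only ~4-5 dB,
# which is comparable to normal indoor measurement noise.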
I'm very new to PCL.
I am trying to detect the floor under an object, to check whether the object has toppled over or is positioned horizontally.
I've checked the API and found the method pcl::PointCloud< T >::at.
It seems like I could get the Z-value of a point using at. Is that correct?
If yes, I'm confused about how it should work. Mathematically, a point is infinitely small. On my scans I can see that the point density gets smaller the further away the points are in the Z direction.
Will at always return a point? Is the value the mean of the nearest physical points?
As referenced in the documentation, pcl::PointCloud< T >::at returns the information of a single point (the coordinates plus other data depending on the point format) given column and row information (roughly the X,Y in the depth image). For this reason, this method only works on organized clouds.
Unfortunately, not every point is a valid point. Unless you filter the point cloud, you could find invalid measurements (points which have NaN components). This is pretty normal, just discard those points using a filter. Your intuition is right, the point density is smaller the further away you go from the sensor.
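As a rough illustration (plain Python rather than PCL's C++ API), discarding invalid points just means dropping anything with a non-finite component; PCL itself also ships a helper for this (pcl::removeNaNFromPointCloud, if I remember correctly):

import math

# Toy point cloud as (x, y, z) tuples; NaNs mark invalid measurements,
# as they do in an organized PCL cloud.
cloud = [
    (0.1, 0.2, 1.5),
    (float("nan"), float("nan"), float("nan")),  # invalid measurement
    (0.3, 0.1, 1.7),
]

def is_valid(point):
    """Keep only points whose components are all finite."""
    return all(math.isfinite(c) for c in point)

filtered = [p for p in cloud if is_valid(p)]
print(f"kept {len(filtered)} of {len(cloud)} points")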
As for what you're trying to achieve, you should take a look at the planar segmentation tutorial on the PCL website and at the Table Object Detector software by Nicolas Burrus. The latter extracts a plane, and the clusters of objects on top of it.
I want to estimate the location of a user using the surrounding cell towers. For each tower, I have a location and a signal strength. Currently I use a simple mean of the coordinates, but it is not very accurate (the user is not necessarily between the towers).
I guess the solution is to draw a circle around each tower (the weaker the signal strength, the larger the circle) and then compute the intersection of the circles. I usually don't have more than 3 cell towers.
Any idea how? I found the Delaunay triangulation method but I don't think it applies here.
Thank you
You need to convert each signal strength to an estimate of distance and then use each distance (as the radius of a circle) in order to triangulate. You'll need at least three transmitters to resolve ambiguity, and accuracy will not be great, since signal strength is only very approximately related to distance and is affected by numerous external factors in the real world. Note that in ideal conditions, signal strength follows an inverse square law with distance.
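As a sketch of the idea (numpy/scipy, with made-up tower positions and distance estimates), you can solve for the point whose distances to the towers best match the estimates in a least-squares sense:

import numpy as np
from scipy.optimize import least_squares

# Tower positions (x, y) in arbitrary units and the distances estimated
# from signal strength (both made up for illustration).
towers = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
distances = np.array([6.0, 7.0, 4.5])

def residuals(p):
    """Difference between predicted and estimated distances at point p."""
    return np.linalg.norm(towers - p, axis=1) - distances

# Start from the centroid of the towers and refine by least squares.
result = least_squares(residuals, x0=towers.mean(axis=0))
print("estimated position:", result.x)

With only three towers and noisy distance estimates the result will still be rough, but it is usually better than a plain average of the tower coordinates.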
Let's say I have a set of 5 markers. I am trying to find the relative distances between each marker using an augmented reality framework such as ARToolkit. In my camera feed the first 20 frames show me the first 2 markers only, so I can work out the transformation between the 2 markers. The second 20 frames show me the 2nd and 3rd markers only, and so on. The last 20 frames show me the 5th and 1st markers. I want to build up a 3D map of the marker positions of all 5 markers.
My question is, knowing that there will be inaccuracies with the distances due to low quality of the video feed, how do I minimise the inaccuracies given all the information I have gathered?
My naive approach would be to use the first marker as a base point: from the first 20 frames, take the mean of the transformations and place the 2nd marker, and so forth for the 3rd and 4th. For the 5th marker, place it in between the 4th and 1st, at the middle of the mean transformations between the 5th and 1st and between the 4th and 5th. This approach feels biased towards the first marker placement, though, and doesn't take into account the camera seeing more than 2 markers per frame.
Ultimately I want my system to be able to work out the map of x number of markers. In any given frame up to x markers can appear and there are non-systemic errors due to the image quality.
Any help regarding the correct approach to this problem would be greatly appreciated.
Edit:
More information regarding the problem:
Let's say the real-world map is as follows:
Let's say I get 100 readings for each of the transformations between the points, as represented by the arrows in the image. The real values are written above the arrows.
The values I obtain have some error (assumed to follow a Gaussian distribution about the actual value). For instance, one of the readings obtained for marker 1 to 2 could be x:9.8 y:0.09. Given that I have all these readings, how do I estimate the map? The result should ideally be as close to the real values as possible.
My naive approach has the following problem: if the average of the transforms from 1 to 2 is slightly off, the placement of 3 can be off, even though the reading from 2 to 3 is very accurate. This problem is shown below:
The greens are the actual values, the blacks are the calculated values. The average transform of 1 to 2 is x:10 y:2.
You can use a least-squares method, to find the transformation that gives the best fit to all your data. If all you want is the distance between the markers, this is just the average of the distances measured.
Assuming that your marker positions are fixed (e.g., to a fixed rigid body), and you want their relative position, then you can simply record their positions and average them. If there is a potential for confusing one marker with another, you can track them from frame to frame, and use the continuity of each marker location between its two periods to confirm its identity.
If you expect your rigid body to be moving (or if the body is not rigid, and so forth), then your problem is significantly harder. Two markers at a time is not sufficient to fix the position of a rigid body (which requires three). However, note that, at each transition, you have the location of the old marker, the new marker, and the continuous marker, at almost the same time. If you already have an expected location on the body for each of your markers, this should provide a good estimate of a rigid pose every 20 frames.
In general, if your body is moving, best performance will require some kind of model for its dynamics, which should be used to track its pose over time. Given a dynamic model, you can use a Kalman filter to do the tracking; Kalman filters are well-adapted to integrating the kind of data you describe.
By including the locations of your markers as part of the Kalman state vector, you may be able to deduce their relative locations from sensor data alone (which appears to be your goal), rather than requiring this information a priori. If you want to be able to handle an arbitrary number of markers efficiently, you may need to come up with some clever mutation of the usual methods; your problem seems designed to avoid solution by conventional decomposition methods such as sequential Kalman filtering.
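For reference, here is a generic sketch of the linear Kalman predict/update cycle (plain numpy, a 1D constant-velocity model with made-up noise parameters, not tied to the marker problem):

import numpy as np

# Minimal linear Kalman filter for a 1D constant-velocity model.
# State: [position, velocity]; measurement: noisy position.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
H = np.array([[1.0, 0.0]])                 # we observe position only
Q = 0.01 * np.eye(2)                       # process noise (assumed)
R = np.array([[0.25]])                     # measurement noise (assumed)

x = np.array([[0.0], [0.0]])               # initial state estimate
P = np.eye(2)                              # initial covariance

def kalman_step(x, P, z):
    """One predict/update cycle for measurement z."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                     # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for z in [1.1, 2.0, 2.9, 4.2, 5.0]:        # fake noisy position readings
    x, P = kalman_step(x, P, np.array([[z]]))
print("final state estimate:", x.ravel())

The marker problem would use a larger state vector (pose of the body plus marker offsets) and a measurement model tied to the camera, but the predict/update structure is the same.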
Edit, as per the comments below:
If your markers yield a full 3D pose (instead of just a 3D position), the additional data will make it easier to maintain accurate information about the object you are tracking. However, the recommendations above still apply:
If the labeled body is fixed, use a least-squares fit of all relevant frame data.
If the labeled body is moving, model its dynamics and use a Kalman filter.
New points that come to mind:
Trying to manage a chain of relative transformations may not be the best way to approach the problem; as you note, it is prone to accumulated error. However, it is not necessarily a bad way, either, as long as you can implement the necessary math in that framework.
In particular, a least-squares fit should work perfectly well with a chain or ring of relative poses (see the sketch after these points).
In any case, for either a least-squares fit or for Kalman filter tracking, a good estimate of the uncertainty of your measurements will improve performance.
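To make the least-squares idea concrete, here is a minimal sketch (plain numpy, made-up measurements) that estimates marker positions from noisy relative measurements around a ring, with marker 1 anchored at the origin. It treats each measurement as a pure translation in a common orientation, which is a simplification of the full pose problem:

import numpy as np

# Noisy relative measurements around a ring of 5 markers (indices 0-4 here
# correspond to markers 1-5): each entry (i, j, dx, dy) says
# "marker j minus marker i is roughly (dx, dy)". Values are made up.
measurements = [
    (0, 1, 10.1, 0.2),
    (1, 2, 9.8, -0.1),
    (2, 3, 10.3, 0.1),
    (3, 4, 9.9, 0.0),
    (4, 0, -40.0, -0.3),   # closing the ring back to marker 1
]
n = 5

# Build a linear system A x = b for the unknown positions, with marker 1
# fixed at the origin to remove the global translation ambiguity.
rows, b = [], []
for i, j, dx, dy in measurements:
    for axis, d in ((0, dx), (1, dy)):
        row = np.zeros(2 * n)
        row[2 * j + axis] += 1.0
        row[2 * i + axis] -= 1.0
        rows.append(row)
        b.append(d)
# Anchor marker 1 at (0, 0) with two extra equations.
for axis in (0, 1):
    row = np.zeros(2 * n)
    row[axis] = 1.0
    rows.append(row)
    b.append(0.0)

A = np.array(rows)
b = np.array(b)
positions, *_ = np.linalg.lstsq(A, b, rcond=None)
print(positions.reshape(n, 2))

Because the closing measurement from marker 5 back to marker 1 enters the same system as the others, the error is spread over the whole ring instead of accumulating at the last marker, which addresses the problem described in the question.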
I would like to create a digital (square) signal on my sound card. It works great if I generate high frequencies. But, since I can't output DC on a sound card, for lower frequencies the resulting digital bits will all slowly fade to 0.
This is what the soundcard's high-pass does to my square wave:
http://www.electronics-tutorials.ws/filter/fil39.gif
What's the mathematical function of a signal that, when passed through a high-pass, will become square?
Ideally, the solution is demonstrated in gnuplot.
The sound card cuts out the low frequencies in the waveform, so you need to boost those by some amount in what you pass to it.
A square wave contains many frequencies (see the section on the Fourier series here). I suspect the easiest method of generating a corrected square wave is to sum a Fourier series, boosting the amplitudes of the low frequency components to compensate for the high-pass filter in the sound card.
In order to work out how much to boost each low-frequency component, you will first need to measure the response of the high-pass filter in your soundcard by outputting sine waves of various frequencies but constant amplitude, and measuring for each frequency the ratio r(f) of the amplitude of the output to the amplitude of the input. Then, an approximation to a square wave output can be generated by multiplying the amplitude of each frequency component f in the square wave Fourier series by 1/r(f) (the 'inverse filter').
It's possible that the high-pass filter in the soundcard also adjusts the phase of the signal. In this case, one might be better off modelling the high pass as an RC filter, (which is probably how the soundcard is doing the filtering), and invert both the amplitude and phase response from that.
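As a sketch of the inverse-filter approach (plain Python/numpy, assuming a first-order RC high-pass with an assumed cutoff rather than a measured response):

import numpy as np

fs = 44100          # sample rate (Hz)
f0 = 20.0           # square-wave fundamental (Hz)
fc = 50.0           # assumed cutoff of the sound card's high-pass (Hz)
duration = 0.2
t = np.arange(int(fs * duration)) / fs

def rc_highpass_response(f):
    """Complex response of a first-order RC high-pass at frequency f."""
    x = 1j * f / fc
    return x / (1.0 + x)

# Build the pre-distorted signal: each odd harmonic of the square wave is
# divided by the filter response so that, after the high-pass, the harmonics
# come out with the amplitudes and phases of an ideal square wave.
signal = np.zeros_like(t)
for k in range(1, 50, 2):                       # odd harmonics up to ~1 kHz
    f = k * f0
    coeff = (4 / (np.pi * k)) / rc_highpass_response(f)
    # Ideal harmonic is (4/(pi*k)) * sin(2*pi*f*t); the inverse filter is
    # applied as a complex gain (magnitude boost plus phase shift).
    signal += np.imag(coeff * np.exp(2j * np.pi * f * t))

# 'signal' can now be scaled to PCM range and sent to the card.
signal /= np.max(np.abs(signal))

Because the low harmonics are boosted before the filter, the pre-distorted signal has larger peaks than the target square wave, so it has to be normalized (or generated at reduced level) to stay within the DAC's range.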
Some of the previous answers have correctly noted that the high-pass filter (the AC coupling capacitor on the soundcard's output) is what prevents the low-frequency square waves from "staying on", so they decay quickly.
There is no way to completely beat this filter from software or it wouldn't be there, now would it? If you can live with lower amplitude square waves at the lower frequencies, you can approximate them by sending out something like a triangle wave. From a transient analysis perspective, the theory of operation here is that as the coupling capacitor is discharging (blocking DC) you are increasing its bias voltage to counteract that discharge thus maintaining the square wave's plateau for a while. Of course you eventually run out of PCM headroom (you can't keep increasing the voltage indefinitely), so a 24-bit card is better in this respect than a 16-bit one as it will give you more resolution. Another, more abstract way to think of this is that the RC filter works as a differentiator, so in order to get the flat peaks of the square wave you need to give it the flat slopes of the triangle wave at the input. But this is an idealized behavior.
As a quick proof of concept, here's what a 60 Hz ±1 V triangle signal becomes when passed through a 1 uF coupling cap into a 1 kOhm load; it approximates a ±200 mV square wave.
Note that the impedance/resistance of the load matters quite a bit here; if you lower it to, say, 100 ohm, the output amplitude decreases dramatically. This is how coupling caps block DC for speakers/headphones, because these devices have much lower impedance than 1 kOhm.
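For the record, a rough discrete-time simulation of that circuit (plain Python/numpy, first-order RC high-pass formed by the 1 uF cap and the 1 kOhm load) gives numbers in the same ballpark:

import numpy as np

fs = 1_000_000                      # 1 MHz simulation step
f_tri = 60.0                        # triangle frequency (Hz)
R, C = 1_000.0, 1e-6                # 1 kOhm load, 1 uF coupling cap
rc = R * C
dt = 1.0 / fs
alpha = rc / (rc + dt)              # discrete first-order high-pass coefficient

t = np.arange(int(fs * 0.05)) * dt  # 50 ms of signal (three 60 Hz periods)
# +/-1 V triangle wave.
x = 2.0 * np.abs(2.0 * ((t * f_tri) % 1.0) - 1.0) - 1.0

# y[n] = alpha * (y[n-1] + x[n] - x[n-1]): the RC high-pass seen by the load.
y = np.zeros_like(x)
for n in range(1, len(x)):
    y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])

print("output swing: %+.0f mV to %+.0f mV" % (1e3 * y.min(), 1e3 * y.max()))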
If I can find a bit more time later today, I'll add a better simulation, with a better shaped stimulus instead of the simple triangle wave, but I can't get that from your average web-based circuit simulator software...
Well, if you're lucky you can get one of those $0.99 USB sound cards where the manufacturer has cut corners so much that they didn't install coupling caps. https://www.youtube.com/watch?v=4GNRzwfP7RE
Unfortunately, you cannot get a good approximation of a square wave. Sound hardware is intentionally slew-rate limited and would not be able to produce a falling or rising edge beyond its intended frequency range.
You can approximate a badly deformed square wave by alternating a high and low PCM code (+max, -max) every N samples.
You can't actually produce a true square wave, because it has infinite bandwidth. You can produce a reasonable approximation of a square wave though, at frequencies between say 10 Hz and 1 kHz (below 10 Hz you may have problems with the analogue part of your sound card etc, and above around 1 kHz the approximation will become increasingly inaccurate, since you can only reproduce a relatively small number of harmonics).
To generate the waveform, the sample values just alternate between +/- some value, e.g. full scale, which would be -32767 and +32767 for a 16-bit PCM stream. The frequency is determined by the period of these samples. E.g. for a 44.1 kHz sample rate, if you have say 100 samples of -32767 and then 100 samples of +32767, i.e. a period of 200 samples, then the fundamental frequency of your square wave will be 44.1 kHz / 200 ≈ 220 Hz.
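For example, a minimal sketch in Python that writes such a square wave to a WAV file using the standard library (the file name and parameters are arbitrary):

import wave
import struct

sample_rate = 44100
half_period = 100                   # samples per half period -> ~220.5 Hz
amplitude = 32767                   # full scale for 16-bit PCM
num_periods = 220                   # about one second of audio

frames = bytearray()
for _ in range(num_periods):
    frames += struct.pack("<h", amplitude) * half_period
    frames += struct.pack("<h", -amplitude) * half_period

with wave.open("square.wav", "wb") as wav:
    wav.setnchannels(1)             # mono
    wav.setsampwidth(2)             # 16-bit samples
    wav.setframerate(sample_rate)
    wav.writeframes(bytes(frames))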
I found an application that I built on.
http://www.blogger.com/blogger.g?blogID=999906212197085612#editor/target=post;postID=7722571737880350755
You can generate the format you want and even the pattern you need.
The code uses SLIMDX.