Moving points to a regular grid
I need to evenly distribute clumped 3D data; a 2D solution would also be terrific. The datasets run up to many millions of data points.
I am looking for the best method to evenly distribute [i.e. fully populate a correctly sized grid] clumped 3D or 2D data.
The method currently used is sorting in numerous directions, numerous times, with a shake now and again to separate the clumps a little. It is known to be far from optimal: in general, sorting alone is no good because it spreads/flattens clumps of points across a single surface.
Triangulation [de-warping back to a regular grid] would seemingly be best, but I could never get a proper hull and had other problems.
Pressure-equalization-type methods seem over the top.
Can anybody point me in the direction of information on this?
Thanks for your time.
Currently used [inadequate] code
1 - allocates indexes for sorting in various directions [side to side, then on the diagonals];
2 - performs the sorts independently;
3 - allocates 2D locations from each sort;
4 - averages the locations obtained from the different sorts;
5 - shakes the dataset to de-clump [attempted side-to-side and up/down movement of the whole dataset, leaving duplicates static];
6 - repeats as required, up to 11 times.
I presume the "best" result would be the one with the minimum total movement from the original locations to the final gridded locations.
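For what it is worth, that "minimum total movement onto a fully populated grid" criterion is exactly a linear (sum) assignment problem, so for modest point counts it can be solved optimally. Below is a minimal sketch in R (not the method described above), assuming the clue package; the underlying Hungarian-style solver is roughly O(n^3), so millions of points would need an approximate or tiled variant.

    # Minimal sketch, assuming the 'clue' package (solve_LSAP) is installed.
    # Snap n clumped 2D points onto an n-node regular grid while minimising
    # the total squared movement, i.e. a linear sum assignment problem.
    library(clue)

    set.seed(1)
    pts  <- matrix(rnorm(2 * 100, sd = 0.3), ncol = 2)              # clumped points
    grid <- as.matrix(expand.grid(x = seq(-1, 1, length.out = 10),
                                  y = seq(-1, 1, length.out = 10))) # 10 x 10 grid

    n <- nrow(pts)
    # cost[i, j] = squared distance from point i to grid node j
    cost <- as.matrix(dist(rbind(pts, grid)))[seq_len(n), n + seq_len(nrow(grid))]^2

    assignment <- as.integer(solve_LSAP(cost))  # one grid node per point
    gridded    <- grid[assignment, ]            # every node used exactly once
    sum(sqrt(rowSums((pts - gridded)^2)))       # total movement achieved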
Related
A large amount of points to create separate polygons (ArcGIS/QGIS)
I used a drone to create a DOF of a small area. During the flight it takes a photo every 20 seconds or so (roughly every 40 metres of flight). I have created a CSV file, which I converted to a point shapefile. In total I flew 10 so-called "missions" with the drone, each with 100-200 points, which are "shaped" as squares on the map (a visual example of the data was attached as an image). What I want now is to create a polygon shapefile from the point shapefile. Because those points sometimes overlap, I cannot use the "Aggregate Points" task, as it is only distance-based. I want to create the polygons automatically, using some kind of script. What could help is the fact that the maximum time between two points (i.e. between photos) within a mission is 10-20 seconds, so if the time gap is over 3 minutes, it is another "mission". Can you help with such a script, one that would quickly and automatically create as many polygons as there are missions?
Okay, I think I understand what you are trying to accomplish. Since no one has replied, I'll give it a quick shot so you have something to try. The strategy I would use is:
Clustering algorithm: run a clustering algorithm such as DBSCAN on the timestamp dimension to classify the points into time-based groups instead of distance-based ones (since, as you said, distance-based separation is not enough to properly identify and separate the points). Afterwards every point should carry a group-id column. The maximum-distance parameter of the algorithm should be around 20 seconds, or even a minute (since you said the missions are separated by at least about 3 minutes).
Feature-based points to polygon: then run a generic Polygon_from_points(...) style function that turns the clustered points into polygon shapes based on a specific discriminant feature, which in your case is the group id.
How does this work? The groups are separated properly first (time-based), and then a generic point-to-polygon-by-feature tool does the rest (ArcGIS should have one). I don't have an example dataset, nor any code written, but based on what you described I think it would work; hope it helps.
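A minimal sketch of that two-step recipe in R (untested against real data): it assumes the dbscan and sf packages and a point layer pts read from the shapefile with a POSIXct timestamp column; the file names, the column name and the eps value are placeholders. DBSCAN on the timestamps with minPts = 1 is effectively just a time-gap grouping.

    # Sketch only: file/column names and eps are assumptions, adjust to the data.
    library(sf)      # read/write shapefiles, build polygons
    library(dbscan)  # density-based clustering

    pts  <- st_read("missions_points.shp")            # hypothetical input layer
    secs <- as.numeric(as.POSIXct(pts$timestamp))     # photo timestamps in seconds

    # Step 1: cluster on time only. eps = 60 s groups photos taken within a
    # minute of each other; minPts = 1 so no point is discarded as noise.
    pts$mission <- dbscan(matrix(secs, ncol = 1), eps = 60, minPts = 1)$cluster

    # Step 2: one convex-hull polygon per mission id.
    hulls <- do.call(rbind, lapply(split(pts, pts$mission), function(g)
      st_sf(mission = g$mission[1], geometry = st_convex_hull(st_union(g)))))

    st_write(hulls, "missions_polygons.shp")          # hypothetical output name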
Colocalization in R / Cross-Correlation of 3D matrices
I hope this has not been asked before, but I am currently analyzing some microscopy pictures in R and I am not quite sure how to tackle this. The situation is as follows:
- I have several pictures of different targets in cells, which show spots of signal.
- Some pictures show the same cells but were acquired after others and are therefore a little "off" in the x-, y- and z-direction.
- Some, but by no means all, of the pictures show colocalization, i.e. spots from one picture also show up in other pictures.
From the spot-detection software I now have data frames for all spots in each picture (one data frame per picture) with the x-, y- and z-coordinates. I am now looking for a) a way to align these matrices of spots from the different colors; I thought that cross-correlation of the matrices might be the way to go (however, is there cross-correlation in 3D in R?), and b) a way to calculate the colocalization. As these are pictures and therefore intrinsically noisy, even colocalized spots may have slightly different coordinates. Is there a function or package in R which merges these data based on a threshold or another parameter of my choice? Thanks a lot in advance for all your answers!! Simon
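No answer is included for this one, but as a starting point for part (b), a threshold-based nearest-neighbour match between two spot tables can be written in a few lines of base R; the sketch below is not from the thread, assumes data frames with x, y and z columns, and the function name and threshold are placeholders. The mean (or median) of the matched offsets also gives a crude estimate of the x/y/z shift asked about in part (a).

    # Sketch only: threshold-based colocalization between two spot tables.
    # df1, df2: data frames with columns x, y, z (one row per detected spot).
    match_spots <- function(df1, df2, threshold = 0.5) {
      n1 <- nrow(df1)
      # cross-distance block between the two spot sets
      d <- as.matrix(dist(rbind(df1[, c("x", "y", "z")],
                                df2[, c("x", "y", "z")])))[seq_len(n1),
                                                           n1 + seq_len(nrow(df2)),
                                                           drop = FALSE]
      nearest <- apply(d, 1, which.min)                # closest df2 spot per df1 spot
      keep    <- d[cbind(seq_len(n1), nearest)] <= threshold
      data.frame(idx1 = which(keep), idx2 = nearest[keep],
                 dist = d[cbind(which(keep), nearest[keep])])
    }

    # Rough x/y/z alignment estimate from the matched pairs (part a):
    #   m <- match_spots(df1, df2)
    #   colMeans(df2[m$idx2, c("x", "y", "z")] - df1[m$idx1, c("x", "y", "z")])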
Adding plotstick-like arrows to a scatterplot
This is my first post here, though I have read a lot of your Q&A over the last 6 months. I'm currently working on ADCP (Acoustic Doppler Current Profiler) data, handled with the "oce" package from Dan Kelley (a little bit of advertising for those who want to work with oceanographic data in R). I'm not very experienced in R, and I have read the question about abline for levelplot functions, "How to add lines to a levelplot made using lattice (abline somehow not working)?".
What I currently have is a levelplot representing a time series of echo-intensity data (from the backscattered signal, which is monitored at the same time as the current), taken over 10 m of depth; this 10 m depth line is divided into 25 rows, and each measurement is made along that line (see the code part to get an idea of what I have; unfortunately, my reputation does not allow me to post images).
I then generate another plot, which represents the current direction as arrows: the length of each arrow gives an indication of the current strength, and its orientation is shown (both are obtained by taking the two components of the current velocity, east-west and north-south, and combining them into the resulting current). There is one arrow per time step (thus, for the 1000 columns of my example data, there are always two components of the current velocity). The arrows are drawn at the beginning of each measurement cell, i.e. at each row of my data, giving a representation of the currents for the whole water column. See the code part for an "as I have it" representation of the currents.
The purpose of this question is to understand how I can superimpose those two representations, drawing the current arrows at each row of the plotted data, so that current direction, current intensity and echo intensity are all shown together. I can't find a link to describe exactly what I mean, but it is something I have already seen. I tried the panel function, which seems to be the best option, but my knowledge of R and of this kind of work is limited, and I hope one of you may have the time and the knowledge to help me solve this problem far faster than I could. I am, of course, available to answer any questions or give more details. I may ask a lot more; after working on a large piece of code for 6 months, my thirst for learning is now large.
Code to represent data : Here are some data to represent what I have: U (north/south component of velocity) and V (East/west): U1= c(0.043,0.042,0.043,0.026,0.066,-0.017,-0.014,-0.019,0.024,-0.007,0.000,-0.048,-0.057,-0.101,-0.063,-0.114,-0.132,-0.103,-0.080,-0.098,-0.123,-0.087,-0.071,-0.050,-0.095,-0.047,-0.031,-0.028,-0.015,0.014,-0.019,0.048,0.026,0.039,0.084,0.036,0.071,0.055,0.019,0.059,0.038,0.040,0.013,0.044,0.078,0.040,0.098,0.015,-0.009,0.013,0.038,0.013,0.039,-0.008,0.024,-0.004,0.046,-0.004,-0.079,-0.032,-0.023,-0.015,-0.001,-0.028,-0.030,-0.054,-0.071,-0.046,-0.029,0.012,0.016,0.049,-0.020,0.012,0.016,-0.021,0.017,0.013,-0.008,0.057,0.028,0.056,0.114,0.073,0.078,0.133,0.056,0.057,0.096,0.061,0.096,0.081,0.100,0.092,0.057,0.028,0.055,0.025,0.082,0.087,0.070,-0.010,0.024,-0.025,0.018,0.016,0.007,0.020,-0.031,-0.045,-0.009,-0.060,-0.074,-0.072,-0.082,-0.100,-0.047,-0.089,-0.074,-0.070,-0.070,-0.070,-0.075,-0.070,-0.055,-0.078,-0.039,-0.050,-0.049,0.024,-0.026,-0.021,0.008,-0.026,-0.018,0.002,-0.009,-0.025,0.029,-0.040,-0.006,0.055,0.018,-0.035,-0.011,-0.026,-0.014,-0.006,-0.021,-0.031,-0.030,-0.056,-0.034,-0.026,-0.041,-0.107,-0.069,-0.082,-0.091,-0.096,-0.043,-0.038,-0.056,-0.068,-0.064,-0.042,-0.064,-0.058,0.016,-0.041,0.018,-0.008,0.058,0.006,0.007,0.060,0.011,0.050,-0.028,0.023,0.015,0.083,0.106,0.057,0.096,0.055,0.119,0.145,0.078,0.090,0.110,0.087,0.098,0.092,0.050,0.068,0.042,0.059,0.030,-0.005,-0.005,-0.013,-0.013,-0.016,0.008,-0.045,-0.021,-0.036,0.020,-0.018,-0.032,-0.038,0.021,-0.077,0.003,-0.010,-0.001,-0.024,-0.020,-0.022,-0.029,-0.053,-0.022,-0.007,-0.073,0.013,0.018,0.002,-0.038,0.024,0.025,0.033,0.008,0.016,-0.018,0.023,-0.001,-0.010,0.006,0.053,0.004,0.001,-0.003,0.009,0.019,0.024,0.031,0.024,0.009,-0.009,-0.035,-0.030,-0.031,-0.094,-0.006,-0.052,-0.061,-0.104,-0.098,-0.054,-0.161,-0.110,-0.078,-0.178,-0.052,-0.073,-0.051,-0.065,-0.029,-0.012,-0.053,-0.070,-0.040,-0.056,-0.004,-0.032,-0.065,-0.005,0.036,0.023,0.043,0.078,0.039,0.019,0.061,0.025,0.036,0.036,0.062,0.048,0.073,0.037,0.025,0.000,-0.007,-0.014,-0.050,-0.014,0.007,-0.035,-0.115,-0.039,-0.113,-0.102,-0.109,-0.158,-0.158,-0.133,-0.110,-0.170,-0.124,-0.115,-0.134,-0.097,-0.106,-0.155,-0.168,-0.038,-0.040,-0.074,-0.011,-0.040,-0.003,-0.019,-0.022,-0.006,-0.049,-0.048,-0.039,-0.011,-0.036,-0.001,-0.018,-0.037,-0.001,0.033,0.061,0.054,0.005,0.040,0.045,0.062,0.016,-0.007,-0.005,0.009,0.044,0.029,-0.016,-0.028,-0.021,-0.036,-0.072,-0.138,-0.060,-0.109,-0.064,-0.142,-0.081,-0.032,-0.077,-0.058,-0.035,-0.039,-0.013,0.007,0.007,-0.052,0.024,0.018,0.067,0.015,-0.002,-0.004,0.038,-0.010,0.056) 
V1=c(-0.083,-0.089,-0.042,-0.071,-0.043,-0.026,0.025,0.059,-0.019,0.107,0.049,0.089,0.094,0.090,0.120,0.169,0.173,0.159,0.141,0.157,0.115,0.128,0.154,0.083,0.038,0.081,0.129,0.120,0.112,0.074,0.022,-0.022,-0.028,-0.048,-0.027,-0.056,-0.027,-0.107,-0.020,-0.063,-0.069,-0.019,-0.055,-0.071,-0.027,-0.034,-0.018,-0.089,-0.068,-0.129,-0.034,-0.002,0.011,-0.009,-0.038,-0.013,-0.006,0.027,0.037,0.022,0.087,0.080,0.119,0.085,0.076,0.072,0.029,0.103,0.019,0.020,0.052,0.024,-0.051,-0.024,-0.008,0.011,-0.019,0.023,-0.011,-0.033,-0.101,-0.157,-0.094,-0.099,-0.106,-0.103,-0.139,-0.093,-0.098,-0.083,-0.118,-0.142,-0.155,-0.095,-0.122,-0.072,-0.034,-0.047,-0.036,0.014,0.035,-0.034,-0.012,0.054,0.030,0.060,0.091,0.013,0.049,0.083,0.070,0.127,0.048,0.118,0.123,0.099,0.097,0.074,0.125,0.051,0.107,0.069,0.040,0.102,0.100,0.119,0.087,0.077,0.044,0.091,0.020,0.010,-0.028,0.026,-0.018,-0.020,0.010,0.034,0.005,0.010,0.028,-0.043,0.025,-0.069,-0.003,0.004,-0.001,0.024,0.032,0.076,0.033,0.071,0.000,0.052,0.034,0.058,0.002,0.070,0.025,0.056,0.051,0.080,0.051,0.101,0.009,0.052,0.079,0.035,0.051,0.049,0.064,0.004,0.011,0.005,0.031,-0.021,-0.024,-0.048,-0.011,-0.072,-0.034,-0.020,-0.052,-0.069,-0.088,-0.093,-0.084,-0.143,-0.103,-0.110,-0.124,-0.175,-0.083,-0.117,-0.090,-0.090,-0.040,-0.068,-0.082,-0.082,-0.061,-0.013,-0.029,-0.032,-0.046,-0.031,-0.048,-0.028,-0.034,-0.012,0.006,-0.062,-0.043,0.010,0.036,0.050,0.030,0.084,0.027,0.074,0.082,0.087,0.079,0.031,0.003,0.001,0.038,0.002,-0.038,0.003,0.023,-0.011,0.013,0.003,-0.046,-0.021,-0.050,-0.063,-0.068,-0.085,-0.051,-0.052,-0.065,0.014,-0.016,-0.082,-0.026,-0.032,0.019,-0.026,0.036,-0.005,0.092,0.070,0.045,0.074,0.091,0.122,-0.007,0.094,0.064,0.087,0.063,0.083,0.109,0.062,0.096,0.036,-0.019,0.075,0.052,0.025,0.031,0.078,0.044,-0.018,-0.040,-0.039,-0.140,-0.037,-0.095,-0.056,-0.044,-0.039,-0.086,-0.062,-0.085,-0.023,-0.103,-0.035,-0.067,-0.096,-0.097,-0.060,0.003,-0.051,0.014,-0.002,0.054,0.045,0.073,0.080,0.096,0.104,0.126,0.144,0.136,0.132,0.160,0.155,0.136,0.080,0.144,0.087,0.093,0.103,0.151,0.165,0.146,0.159,0.156,0.002,0.023,-0.019,0.078,0.031,0.038,0.019,0.094,0.018,0.028,0.064,-0.052,-0.034,0.000,-0.074,-0.076,-0.028,-0.048,-0.025,-0.095,-0.098,-0.045,-0.016,-0.030,-0.036,-0.012,0.023,0.038,0.042,0.039,0.073,0.066,0.027,0.016,0.093,0.129,0.138,0.121,0.077,0.046,0.067,0.068,0.023,0.062,0.038,-0.007,0.055,0.006,-0.015,0.008,0.064,0.012,0.004,-0.055,0.018,0.042) 
U2=c(0.022,0.005,-0.022,0.025,-0.014,-0.020,-0.001,-0.021,-0.008,-0.006,-0.056,0.050,-0.068,0.018,-0.106,-0.053,-0.084,-0.082,-0.061,-0.041,-0.057,-0.123,-0.060,-0.029,-0.084,-0.004,0.030,-0.021,-0.036,-0.016,0.006,0.088,0.088,0.079,0.063,0.097,0.020,-0.048,0.046,0.057,0.065,0.042,0.022,0.016,0.041,0.109,0.024,-0.010,-0.084,-0.002,0.004,-0.033,-0.025,-0.020,-0.061,-0.060,-0.043,-0.027,-0.054,-0.054,-0.040,-0.077,-0.043,-0.014,0.030,-0.051,0.001,-0.029,0.008,-0.023,0.015,0.002,-0.001,0.029,0.048,0.081,-0.022,0.040,0.018,0.131,0.059,0.055,0.043,0.027,0.091,0.104,0.101,0.084,0.048,0.057,0.044,0.083,0.063,0.083,0.079,0.042,-0.021,0.017,0.005,0.001,-0.033,0.010,-0.028,-0.035,-0.012,-0.034,-0.055,-0.009,0.001,-0.084,-0.047,-0.020,-0.046,-0.042,-0.058,-0.071,0.013,-0.045,-0.070,0.000,-0.067,-0.090,0.012,-0.013,-0.013,-0.009,-0.063,-0.047,-0.030,0.046,0.026,0.019,0.007,-0.056,-0.062,0.009,-0.019,-0.005,0.003,0.022,-0.006,-0.019,0.020,0.025,0.040,-0.032,0.015,0.019,-0.014,-0.031,-0.047,0.010,-0.058,-0.079,-0.052,-0.044,0.012,-0.039,-0.007,-0.068,-0.095,-0.053,-0.066,-0.056,-0.033,-0.006,0.001,0.010,0.004,0.011,0.013,0.029,-0.011,0.007,0.023,0.087,0.054,0.040,0.013,-0.006,0.076,0.086,0.103,0.121,0.070,0.074,0.067,0.045,0.088,0.041,0.075,0.039,0.043,0.016,0.065,0.056,0.047,-0.002,-0.001,-0.009,-0.029,0.018,0.041,0.002,-0.022,0.003,0.008,0.031,0.003,-0.031,-0.015,0.014,-0.057,-0.043,-0.045,-0.067,-0.040,-0.013,-0.111,-0.067,-0.055,-0.004,-0.070,-0.019,0.009,0.009,0.032,-0.021,0.023,0.123,-0.032,0.040,0.012,0.042,0.038,0.037,-0.007,0.003,0.011,0.090,0.039,0.083,0.023,0.056,0.030,0.042,0.030,-0.046,-0.034,-0.021,-0.076,-0.017,-0.071,-0.053,-0.014,-0.060,-0.038,-0.076,-0.011,-0.005,-0.051,-0.043,-0.032,-0.014,-0.038,-0.081,-0.021,-0.035,0.014,-0.001,0.001,0.003,-0.029,-0.031,0.000,0.048,-0.036,0.034,0.054,0.001,0.046,0.006,0.039,0.015,0.012,0.034,0.022,0.015,0.033,0.037,0.012,0.057,0.001,-0.014,0.012,-0.007,-0.022,-0.002,-0.008,0.043,-0.041,-0.057,-0.006,-0.079,-0.070,-0.038,-0.040,-0.073,-0.045,-0.101,-0.092,-0.046,-0.047,-0.023,-0.028,-0.019,-0.086,-0.047,-0.038,-0.068,-0.017,0.037,-0.010,-0.016,0.010,-0.005,-0.031,0.004,-0.034,0.005,0.006,-0.015,0.017,-0.043,-0.007,-0.009,0.013,0.026,-0.036,0.011,0.047,-0.025,-0.023,0.043,-0.020,-0.003,-0.043,0.000,-0.018,-0.075,-0.045,-0.063,-0.043,-0.055,0.007,-0.063,-0.085,-0.031,0.005,-0.067,-0.059,-0.059,-0.029,-0.014,-0.040,-0.072,-0.018,0.039,-0.006,-0.001,-0.015,0.038,0.038,-0.009,0.026,0.017,0.056) 
V2=c(-0.014,0.001,0.004,-0.002,0.022,0.019,0.023,-0.023,0.030,-0.085,-0.007,-0.027,0.100,0.058,0.108,0.055,0.132,0.115,0.084,0.046,0.102,0.121,0.036,0.019,0.066,0.049,-0.011,0.020,0.023,0.011,0.041,0.009,-0.009,-0.023,-0.036,0.031,0.012,0.026,-0.011,0.009,-0.027,-0.033,-0.054,-0.004,-0.040,-0.048,-0.009,0.023,-0.028,0.022,0.090,0.060,0.040,0.003,-0.011,0.030,0.107,0.025,0.084,0.036,0.074,0.065,0.078,0.011,0.058,0.092,0.083,0.080,0.039,0.000,-0.027,0.035,0.011,0.004,0.023,-0.033,-0.060,-0.049,-0.101,-0.033,-0.105,-0.042,-0.088,-0.086,-0.093,-0.085,-0.028,-0.046,-0.045,-0.052,-0.009,-0.066,-0.073,-0.067,0.011,-0.057,-0.087,-0.066,-0.103,-0.075,0.003,-0.021,0.010,-0.013,0.021,0.020,0.084,0.028,0.127,0.050,0.104,0.097,0.075,0.021,0.057,0.095,0.080,0.077,0.086,0.110,0.054,0.016,0.105,0.065,0.046,0.047,0.072,0.058,0.092,0.063,0.033,0.087,0.036,0.049,0.093,0.008,0.064,0.068,0.040,0.049,0.035,0.042,0.045,0.021,0.056,0.007,0.026,0.067,0.046,0.088,0.084,0.070,0.037,0.079,0.065,0.074,0.077,0.023,0.094,0.061,0.096,0.068,0.067,0.091,0.061,0.069,0.090,0.046,0.057,0.011,-0.018,0.005,0.001,-0.023,-0.087,0.010,0.023,-0.025,-0.040,-0.059,-0.063,-0.075,-0.136,-0.078,-0.102,-0.128,-0.116,-0.091,-0.136,-0.083,-0.115,-0.063,-0.055,-0.080,-0.093,-0.099,-0.053,-0.042,-0.011,-0.034,-0.027,-0.042,-0.022,-0.008,-0.033,-0.039,-0.036,0.019,0.036,-0.002,0.000,-0.021,0.060,0.030,0.073,0.080,0.061,0.046,0.062,0.010,0.034,0.103,0.107,0.016,0.080,0.067,0.007,0.060,0.021,-0.026,0.008,0.051,0.030,0.001,-0.036,-0.047,0.000,0.006,0.006,0.013,0.009,0.019,0.009,-0.086,-0.020,0.018,0.039,0.014,0.011,0.052,0.031,0.095,0.047,0.065,0.114,0.086,0.102,0.037,0.039,0.060,0.024,0.091,0.058,0.065,0.060,0.045,0.031,0.062,0.047,0.043,0.057,0.032,0.057,0.051,0.019,0.056,0.024,-0.003,0.023,-0.013,-0.032,-0.022,-0.064,-0.021,-0.050,-0.063,-0.090,-0.082,-0.076,-0.077,-0.042,-0.060,-0.010,-0.060,-0.069,-0.028,-0.071,-0.046,-0.020,-0.074,0.080,0.071,0.065,0.079,0.065,0.039,0.061,0.154,0.072,0.067,0.133,0.106,0.080,0.047,0.053,0.110,0.080,0.122,0.075,0.052,0.034,0.081,0.118,0.079,0.101,0.053,0.082,0.036,0.033,0.026,0.002,-0.002,0.020,0.087,0.021,0.034,0.003,-0.021,0.016,-0.009,-0.045,-0.043,-0.020,0.027,0.008,-0.006,0.043,0.045,0.014,0.053,0.083,0.113,0.091,0.028,0.060,0.040,0.019,0.114,0.126,0.090,0.046,0.089,0.029,0.030,0.010,0.045,0.040,0.072,-0.033,-0.008,0.014,-0.018,-0.004,-0.037,0.015,-0.021,-0.015) bindistances=c(1.37,1.62,1.87,2.12,2.37,2.62,2.87,3.12,3.37,3.62,3.87,4.12,4.37,4.62,4.87,5.12,5.37,5.62,5.87,6.12,6.37,6.62,6.87,7.12,7.37,7.62,7.87,8.12) Then, as a representation of currents: AA=14 x11() par(mfrow=c(4,1)) plotSticks(x=seq(from=(1), to=(377), by=(1)), u=U1, v=V1, yscale=ysc,xlab='',ylab='',xaxt='n',yaxt='n',col=(rep('black',384))) axis(side=1) plotSticks(x=seq(from=(1), to=(377), by=(1)), u=U2, v=V2, yscale=ysc,xlab='',ylab='',xaxt='n',yaxt='n',col=(rep('black',384))) plotSticks(x=seq(from=(1), to=(377), by=(1)), u=U2, v=V2, yscale=ysc,xlab='',ylab='',xaxt='n',yaxt='n',col=(rep('black',384))) plotSticks(x=seq(from=(1), to=(377), by=(1)), u=U2, v=V2, yscale=ysc,xlab='',ylab='',xaxt='n',yaxt='n',col=(rep('black',384))) In order to simplify the representation, the three last plots are based on the same data.
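No answer is included here either, so below is a minimal, untested sketch of the panel-function route using lattice, the U1/V1/bindistances vectors above, and a fabricated echo-intensity matrix as a stand-in for the real backscatter data (the arrow scale, the thinning step and the depth bin the arrows are drawn at are all arbitrary choices). Note also that, as posted, the plotSticks() snippet needs library(oce) and a numeric value for ysc before it will run.

    library(lattice)

    # Stand-in echo-intensity matrix: one row per time step, one column per depth bin.
    nt   <- length(U1)
    nz   <- length(bindistances)
    echo <- outer(seq_len(nt), seq_len(nz), function(i, j) sin(i / 30) + j / 10)
    echo <- echo + rnorm(length(echo), sd = 0.1)

    levelplot(echo, xlab = "time index", ylab = "depth bin",
              panel = function(...) {
                panel.levelplot(...)          # the echo-intensity background
                idx <- seq(1, nt, by = 10)    # thin the arrows for readability
                sc  <- 25                     # arrow scale, tune by eye
                # current arrows for one depth bin (row 5 here); in the real case,
                # loop over the bins and use the U/V series measured in each bin
                panel.arrows(idx, rep(5, length(idx)),
                             idx + sc * U1[idx], 5 + sc * V1[idx],
                             length = 0.03, col = "black")
              })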
Accurately measuring relative distance between a set of fiducials (Augmented reality application)
Let's say I have a set of 5 markers. I am trying to find the relative distances between the markers using an augmented reality framework such as ARToolKit. In my camera feed the first 20 frames show only the first 2 markers, so I can work out the transformation between those 2 markers. The second 20 frames show only the 2nd and 3rd markers, and so on. The last 20 frames show the 5th and 1st markers. I want to build up a 3D map of the positions of all 5 markers.
My question is: knowing that there will be inaccuracies in the distances due to the low quality of the video feed, how do I minimise those inaccuracies given all the information I have gathered? My naive approach would be to use the first marker as a base point, take the mean of the transformations from the first 20 frames to place the 2nd marker, and so forth for the 3rd and 4th. For the 5th marker, place it in between the 4th and 1st, in the middle of the mean transformations between the 5th and 1st and between the 4th and 5th. I feel this approach is biased towards the first marker's placement, though, and does not take into account the camera seeing more than 2 markers per frame. Ultimately I want my system to be able to work out the map for x markers; in any given frame up to x markers can appear, and there are non-systematic errors due to the image quality. Any help regarding the correct approach to this problem would be greatly appreciated.
Edit, with more information about the problem: let's say the real-world map is as in the (omitted) image, and I get 100 readings for each of the transformations between the points, represented by the arrows in the image, with the real values written above the arrows. The values I obtain have some error (assumed to follow a Gaussian distribution about the actual value); for instance, one of the readings obtained for marker 1 to 2 could be x: 9.8, y: 0.09. Given that I have all these readings, how do I estimate the map? The result should ideally be as close to the real values as possible. My naive approach has the following problem: if the average of the transforms from 1 to 2 is slightly off, the placement of 3 can be off even though the reading from 2 to 3 is very accurate. In the second (omitted) figure, the greens are the actual values and the blacks are the calculated values; the average transform of 1 to 2 is x: 10, y: 2.
You can use a least-squares method to find the transformation that gives the best fit to all your data. If all you want is the distance between the markers, this is just the average of the distances measured.
Assuming that your marker positions are fixed (e.g., to a fixed rigid body) and you want their relative positions, you can simply record their positions and average them. If there is a potential for confusing one marker with another, you can track them from frame to frame and use the continuity of each marker's location between its two observation periods to confirm its identity.
If you expect your rigid body to be moving (or if the body is not rigid, and so forth), then your problem is significantly harder. Two markers at a time are not sufficient to fix the pose of a rigid body (which requires three). However, note that at each transition you have the location of the old marker, the new marker, and the continuing marker at almost the same time. If you already have an expected location on the body for each of your markers, this should provide a good estimate of a rigid pose every 20 frames.
In general, if your body is moving, best performance will require some kind of model of its dynamics, which should be used to track its pose over time. Given a dynamic model, you can use a Kalman filter to do the tracking; Kalman filters are well adapted to integrating the kind of data you describe. By including the locations of your markers in the Kalman state vector, you may be able to deduce their relative locations from sensor data alone (which appears to be your goal), rather than requiring this information a priori. If you want to handle an arbitrary number of markers efficiently, you may need to come up with some clever mutation of the usual methods; your problem seems designed to avoid solution by conventional decomposition methods such as sequential Kalman filtering.
Edit, as per the comments below: if your markers yield a full 3D pose (instead of just a 3D position), the additional data will make it easier to maintain accurate information about the object you are tracking. However, the recommendations above still apply: if the labeled body is fixed, use a least-squares fit of all relevant frame data; if the labeled body is moving, model its dynamics and use a Kalman filter.
New points that come to mind: trying to manage a chain of relative transformations may not be the best way to approach the problem; as you note, it is prone to accumulated error. It is not necessarily a bad way either, though, as long as you can implement the necessary math in that framework; in particular, a least-squares fit should work perfectly well with a chain or ring of relative poses. In any case, for either a least-squares fit or Kalman-filter tracking, a good estimate of the uncertainty of your measurements will improve performance.
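As an illustration of the least-squares suggestion (this is not code from the answer), here is a small R sketch that recovers a map of 5 static markers from 100 noisy readings of each pairwise offset, in 2D for brevity; marker 1 is fixed at the origin as the datum, and every reading contributes one linear equation p_j - p_i = measured offset. The true positions and noise level are made up.

    set.seed(1)
    true_pos <- matrix(c(0, 0,  10, 0,  10, 10,  0, 10,  5, 15),
                       ncol = 2, byrow = TRUE)                 # the "real-world map"
    pairs <- rbind(c(1, 2), c(2, 3), c(3, 4), c(4, 5), c(5, 1))

    # 100 noisy readings of each pairwise offset, as in the question
    obs <- do.call(rbind, lapply(seq_len(nrow(pairs)), function(k) {
      i <- pairs[k, 1]; j <- pairs[k, 2]
      cbind(i = i, j = j,
            matrix(rep(true_pos[j, ] - true_pos[i, ], each = 100), ncol = 2) +
              matrix(rnorm(200, sd = 0.3), ncol = 2))
    }))

    # Build the design matrix: each reading gives the equation p_j - p_i = offset.
    n <- nrow(true_pos)
    A <- matrix(0, nrow(obs), n)
    A[cbind(seq_len(nrow(obs)), obs[, "j"])] <-  1
    A[cbind(seq_len(nrow(obs)), obs[, "i"])] <- -1
    A <- A[, -1, drop = FALSE]             # marker 1 is the fixed datum (0, 0)

    est <- qr.solve(A, obs[, 3:4])         # least-squares solve, one per coordinate
    rbind(marker1 = c(0, 0), est)          # estimated map of all 5 markers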
Rendering massive amount of data
I have a 3D floating-point matrix; in the worst-case scenario its size could be 200000 x 1000000 x 100. I want to visualize this matrix using Qt/OpenGL. Since the number of elements is extremely high, I want to render them in such a way that when the camera is far away from the matrix, I just show a number of interesting points that give an approximation of what the matrix looks like. When the camera gets closer, I want to show more detail, and hence more elements are computed. I would like to know if there are techniques that deal with this kind of visualization.
The general idea is called level-of-detail rendering and is a whole science in itself. For your domain I would recommend two steps:
1) Reduce the number of cells by averaging them (arithmetic mean) into cubes of different sizes and caching those cubes (on disk as well as in RAM). "Different" means here that you keep the same data at multiple cube sizes, e.g. coarse-grained cubes of 10000x10000x10000 cells and finer cubes of 100x100x100 cells, giving multiple levels of detail. You have to organize these in a hierarchical structure (the larger ones containing multiple smaller ones), and for this I would recommend an octree: http://en.wikipedia.org/wiki/Octree
2) The second step is to actually render parts of this octree. To do this, use the distance from your camera point to the sub-cubes: go through the cubes and decide either to descend into a sub-cube or to render the larger cube, based on this distance function and heuristically chosen or guessed threshold values.
Step 2 can be optimized further, but this is optional: organize the to-be-rendered cubes into layers. The direction of the layers (whether they are x-, y- or z-slices) depends on your camera viewpoint, to which they should be near-perpendicular. Then render each slice into a texture, and voila, you only have to render a single quad with that texture for each slice; 1000 quads are no problem to render.
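To make step 1 concrete, here is a small sketch of the block-averaging that would fill the octree nodes, written in R purely to prototype the data-reduction idea (the actual renderer would of course live in C++/Qt/OpenGL); the array sizes are toy values and the coarsening factor is arbitrary.

    # Sketch only: average a 3D array into a coarser level of detail by a factor f
    # along every axis (dimensions assumed divisible by f). Applying it repeatedly
    # gives the pyramid of cubes that the octree nodes would store.
    coarsen <- function(a, f) {
      d <- dim(a) %/% f
      out <- array(0, d)
      for (i in seq_len(d[1])) for (j in seq_len(d[2])) for (k in seq_len(d[3]))
        out[i, j, k] <- mean(a[((i - 1) * f + 1):(i * f),
                               ((j - 1) * f + 1):(j * f),
                               ((k - 1) * f + 1):(k * f)])
      out
    }

    vol  <- array(rnorm(64^3), dim = c(64, 64, 64))  # toy volume
    lod1 <- coarsen(vol, 4)                          # 16 x 16 x 16
    lod2 <- coarsen(lod1, 4)                         #  4 x  4 x  4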
Qt has ways of rendering huge numbers of elements efficiently. Check the examples/demos that come with Qt.