Hey guys, I am currently working with gnuplot.
I have a .csv file which I have been using to plot some formulas
(e.g. plot "filename.csv" u 0:day($0) = $0). The plots worked out; however, I was wondering whether there is a way within gnuplot to save the output of my formulas as a data file too.
Please check the manual, or type help table in the gnuplot console.
Code:
### save data as text
reset session
f(x) = x
g(x) = x**2
h(x) = x**3
set xrange[-5:5]
set samples 11
plot f(x) w lp, g(x) w lp, h(x) w lp
set table "myOutput.dat"
plot '+' u 1:(f($1)):(g($1)):(h($1)) w table
unset table
### end of code
Edit:
Actually, to be more flexible with data separators (e.g. comma or whatever) in the output file, you could change the plot ... w table command to something like the line below. However, I believe gnuplot will always add a leading space " " and a trailing TAB \t to each line, but maybe this can also be changed.
plot '+' u (sprintf("%g,%g,%g,%g",$1,f($1),g($1),h($1))) w table
Result (myOutput.dat):
-5 -5 25 -125
-4 -4 16 -64
-3 -3 9 -27
-2 -2 4 -8
-1 -1 1 -1
0 0 0 0
1 1 1 1
2 2 4 8
3 3 9 27
4 4 16 64
5 5 25 125
Addition: (creating data in a loop)
With set print you are probably the most flexible: no leading space and no trailing TAB.
Check the manual, or type help set print in the gnuplot console.
Code:
### save data as text, independent of range and samples
reset session
f(x) = x
g(x) = x**2
h(x) = x**3
set print "myOutput.dat"
do for [i=-5:5] {
# loop index only takes integers, multiply i with some factor if necessary
print sprintf("%g,%g,%g,%g",i,f(i),g(i),h(i))
}
set print
### end of code
I have data that comes from different sources with different typical ranges, like so:
VALUE LOWERBOUND UPPERBOUND
5 2 7
6 1 10
2 1 4
22 3 8
...
I would like to normalise VALUEs with respect to LOWERBOUND and UPPERBOUND, but as I have no background in statistics I really can't see how it could be done. Any pointers?
To put it in other words, I guess I would like to rescale VALUEs so they would all fall within the same LOWERBOUND and UPPERBOUND (perhaps the global mean LOWERBOUNDs and UPPERBOUNDs?)
I guess what you are after is something like the following:
Move the lower bound to zero:
newValue = oldValue - LOWERBOUND
Calculate the value as a fraction of the band width, so the upper bound maps to 100 (scale 0 - 100):
newValuePercent = (newValue / (UPPERBOUND - LOWERBOUND)) * 100
By the way, in your example the last value is outside the [LOWERBOUND, UPPERBOUND] range... so I am not sure whether you want to clamp it at the end or not.
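For illustration, here is a minimal Python/NumPy sketch of that rescaling, applied to the sample rows from the question (this is just a sketch of the formulas above, not part of the original answer; the array layout is an assumption):

import numpy as np

# Rows: VALUE, LOWERBOUND, UPPERBOUND (the sample data from the question)
data = np.array([
    [ 5, 2,  7],
    [ 6, 1, 10],
    [ 2, 1,  4],
    [22, 3,  8],
], dtype=float)

value, lower, upper = data[:, 0], data[:, 1], data[:, 2]

# Shift the lower bound to zero, then scale so the upper bound maps to 100
percent = (value - lower) / (upper - lower) * 100
print(percent)  # values outside [LOWERBOUND, UPPERBOUND] land outside 0-100, like the last row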
Hi I'm using gnuplot to plot data from a simulation structured in data blocks, like this:
CurrentTime CurrentState
0 2
1.234 2
1.990 1
2.462 0
CurrentTime CurrentState
0 2
0.895 1
1.456 2
2.052 1
3.017 0
The number of data blocks is not strictly known but is at least 30 blocks.
Notice that the number of time intervals is different for each data block.
I'm using the following code to plot the data as is
# GNUPlot code
set multiplot layout 2,1 title "Insert title" font ",14"
set tmargin 3
set bmargin 3
set lmargin 5
set rmargin 2
plot "data.txt" every :1 using 1:2:(column(-2)) with linespoints lc variable
The next thing I plot will go in the lower plot due to the multiplot command. I want that plot to be the average of my data at time intervals that I set. In pseudo code I want:
# pseudo code
float start, step, stop;
assign start, step, stop;
define Interval=start, by step, to stop; typed another way Interval=start:step:stop
array sum(size(number of data blocks,length(Interval), length(Interval)))
assign sum=0;
for every data block
    for k=0 to length(CurrentTime)
        for j=0 to length(Interval)-1
            (CurrentTime(k) < Interval(j+1) && CurrentTime(k) > Interval(j-1)) ? sum += CurrentState(k) : sum += 0
average=sum/(Number of data blocks)
I am stuck trying to implement that in gnuplot. Any assistance would be awesome!
First there is the data file, some of my real data is
CurrentTime CurrentState
0 2
4.36393 1
5.76339 2
13.752 1
13.7645 2
18.2609 1
19.9713 2
33.7285 1
33.789 0
CurrentTime CurrentState
0 2
3.27887 1
3.74072 2
3.86885 1
4.97116 0
CurrentTime CurrentState
0 2
1.19854 1
3.23982 2
7.30501 1
7.83872 0
Then I used Python to find the average of the data at the time intervals where I want to check the average. I chose to check at discrete time steps, but they could be any time step. The following is my Python code:
#Loading data file: Goal is to calculate average(TimeIntervals)=averageOfTimeIntervals.
import numpy as np
data=np.genfromtxt('data.txt', comments='C')
CurrentState=data[:,1]
CurrentTime=data[:,0]
numberTimeIntervals=101
TimeIntervals=np.linspace(0,numberTimeIntervals-1,numberTimeIntervals) #gives integer values of time
stateOfTimeIntervals=np.zeros(numberTimeIntervals,dtype=np.float64)
#main loop: every run starts at t=0, so each run contributes its own initial
#state below; a separate initial-state assignment would be double-counted
numberSimTimes=len(CurrentTime)
for j in range(0,len(stateOfTimeIntervals)):
    for k in range(0,numberSimTimes-1):
        #CurrentState[k] is the state that holds at TimeIntervals[j] whenever the
        #next recorded event happens after that interval
        if CurrentTime[k] <= TimeIntervals[j] and CurrentTime[k+1] > TimeIntervals[j]:
            stateOfTimeIntervals[j]+=CurrentState[k]
#The number of runs can be calculated by counting the t=0 entries
numberRuns=len(CurrentTime) - np.count_nonzero(CurrentTime)
print("Number of Runs=%d" % numberRuns)
#Compute the average over all runs
averageState=stateOfTimeIntervals/numberRuns
#Write to file and plot with gnuplot
np.savetxt('plot2gnu.txt',averageState)
Then using gnuplot I plotted 'plot2gnu.txt' using the following code
# to plot everything on the same plot use "multiplot"
set multiplot layout 2,1 title "Insert title" font ",14"
set tmargin 3
set bmargin 3
set lmargin 5
set rmargin 2
plot "data.txt" every :1 using 1:2:(column(-2)) with linespoints lc variable
plot 'plot2gnu.txt' using 0:1 with linespoints  # the file has a single column, so use the row index (= integer time) for x
I would like to point out the use of a pseudocolumn 'column(-2)' in the third column specifying line color. 'column(-2)' represents "The index number of the current data set within a file that contains multiple data sets." - From the 'old' gnuplot 4.6 documentation.
I am having trouble understanding the math to convert from object space to view space. I am doing this in hardware and I have the Atranspose matrix below:
ATranspose =
[rightx upx lookx 0]
[righty upy looky 0]
[rightz upz lookz 0]
[-eyeright -eyeup -eyelook 1]
Then to find the point we would do:
[x,y,z,1] = [x',y',z',1]*ATranspose
xnew = xold*rightx + xold*righty + xold*rightz + xold*(-eyeright)
but I am not sure if this is correct.
It could also be
[x,y,z,1]=atranspose*[x',y',z',1]T
Can someone please explain this to me? I can't find anything online about it that isn't directly OpenGL code related; I just want to understand the math behind transforming points from object coordinates to eye coordinates.
This answer is probably much longer than it needs to be. Jump down to the bottom 2 paragraphs or so if you already understand most of the matrix math.
It might be easiest to start by looking at a 1 dimensional problem. In 1D, we have points on a line. We can scale them or we can translate them. Consider three points i,j,k and transformation matrix M.
M = [ s t ]
    [ 0 1 ]

i = [1]   j = [-2]   k = [0]
    [1]       [ 1]       [1]

 j     k  i
─┴──┴──┴──┴──┴─
-2 -1  0  1  2
When we multiply by M, we get:
i' = Mi = [ s t ][ 1] = [ s+t ]
          [ 0 1 ][ 1]   [  1  ]
j' = Mj = [ s t ][-2] = [-2s+t]
          [ 0 1 ][ 1]   [  1  ]
k' = Mk = [ s t ][ 0] = [  t  ]
          [ 0 1 ][ 1]   [  1  ]
So if we assign values to s and t, then we get various transformations on our 1D 'triangle'. Scaling changes the distance between the 'points', while pure translation moves them around with respect to the origin while keeping the spacing constant:
    s=1 t=0          s=2 t=1          s=1 t=2
 j     k  i       j     k  i       j     k  i
─┴──┴──┴──┴──┴─  ─┴──┴──┴──┴──┴─  ─┴──┴──┴──┴──┴─
-2 -1  0  1  2   -3 -1  1  3  5    0  1  2  3  4
It's important to note that order of the transformations is critical. These 1D transformations scale and then translate. If you were to translate first, then the 'point' would be a different distance from the origin and so the scaling factor would affect it differently. For this reason, the transformations are often kept in separate matrices so that the order is clear.
If we move up to 2D, we get matrix N:
    [1 0 tx][ cos(a)  sin(a)  0][sx  0  0]   [ sx*cos(a)  sy*sin(a)  tx ]
N = [0 1 ty][-sin(a)  cos(a)  0][ 0 sy  0] = [-sx*sin(a)  sy*cos(a)  ty ]
    [0 0 1 ][   0       0     1][ 0  0  1]   [     0          0      1  ]
This matrix will 1) scale a point by sx,sy, 2) rotate the point around the origin by a degrees, and then 3) translate the point by tx,ty. Note that this matrix is constructed under the assumption that points are represented as column vectors and that the multiplication will take place as Np. As datenwolf said, if you want to use row vector representation of points but apply the same transformation, you can transpose everything and swap the order. This is a general property of matrix multiplication: (AB)^T = (B^T)(A^T).
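As a quick sanity check of that transpose rule, here is a small NumPy sketch; the angle, scale, and translation values are arbitrary placeholders, not something from the question:

import numpy as np

a = np.radians(30.0)            # rotation angle
sx, sy, tx, ty = 2.0, 3.0, 1.0, -4.0

T = np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])
R = np.array([[ np.cos(a), np.sin(a), 0.0],
              [-np.sin(a), np.cos(a), 0.0],
              [ 0.0,       0.0,       1.0]])
S = np.diag([sx, sy, 1.0])

N = T @ R @ S                   # scale, then rotate, then translate
p = np.array([1.0, 2.0, 1.0])   # homogeneous 2D point

col = N @ p                     # column-vector convention: N * p
row = p @ N.T                   # row-vector convention: p^T * N^T
print(np.allclose(col, row))    # True, since (N p)^T = p^T N^T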
That said, we can talk about transformations in terms of object, world, and eye coordinates. Suppose the eye is sitting at the origin of the world, looking down the world's negative z-axis, with +x to the right and +y up, and the object, a cube, is sitting 10 units down -z (centered on the z-axis), with a width of 2 along the world's x, a depth of 3 along z, and a height of 4 along the world's y. If the center of the cube is the origin of the object's local frame of reference and its local axes conveniently align with the world's axes, then the vertices of the box in object coordinates are the variations on [+/-1,+/-2,+/-1.5]^T.

The near, top, right (from the eye's point of view) vertex has object coordinates [1,2,1.5]^T; in world coordinates, the same vertex is [1,2,-8.5]^T (1.5-10=-8.5). Because of where the eye is, which way it's pointing, and the fact that we define our eye the same way as OpenGL, that vertex has the same eye coordinates as world coordinates.

So let's move and rotate the eye such that the eye's x is right (rt), the eye's y is up, the eye's -z is look (lk), and the eye is positioned at [eyeright(ex) eyeup(ey) eyelook(ez)]^T. Since we want object coordinates transformed to eye coordinates (meaning that we'll treat the eye as the origin), we'll take the inverse of these transformations and apply them to the object vertices (after they have been transformed into world coordinates). So we'll have:
ep = [WORLD_TO_EYE]*[OBJECT_TO_WORLD]*op;
More specifically, for our vertex of interest, we'll have:
[  rt.x  rt.y  rt.z 0][1 0 0 -ex][1 0 0   0][ 1 ]
[  up.x  up.y  up.z 0][0 1 0 -ey][0 1 0   0][ 2 ]
[ -lk.x -lk.y -lk.z 0][0 0 1 -ez][0 0 1 -10][1.5]
[   0     0     0   1][0 0 0   1][0 0 0   1][ 1 ]
For convenience, I've separated the translation of the eye from its rotation. Actually, now that I've written so much, this may be the point of confusion. The matrix that you gave will rotate and then translate. I assumed that the eye's translation was in world coordinates, but as you wrote it in your question, it's actually performing the translation in eye coordinates. I've also negated lk because we've defined the eye to be looking down the negative z-axis, but to make a standard rotation matrix, we want to use positive values.
Anyway, I can keep going, but maybe this answers your question already.
Continuing:
Explaining the above a little further, separating the eye's transformation into two components also makes it much easier to find the inverse. It's easy to see that if translation tx moves the eye somewhere relative to the objects in the world, we can maintain the same relative positions between the eye and points in the world by moving everything in the world by -tx and keeping the eye stationary.
Likewise, consider the eye's orientation as defined by its default right, up, and look vectors:
     [1]      [0]      [ 0]
d_rt=[0] d_up=[1] d_lk=[ 0]
     [0]      [0]      [-1]
Creating a rotation matrix that points these three vectors in a new direction is easy. We just line up our three new axes rt, up, lk (as column vectors):
[rt.x up.x -lk.x 0]
[rt.y up.y -lk.y 0]
[rt.z up.z -lk.z 0]
[ 0    0     0   1]
It's easy to see that if you augment d_rt, d_up, and d_lk and multiply by the above matrix, you get the rt, up, and lk back respectively. So we've applied the transformation that we wanted. To be a proper rotation, the three vectors must be orthonormal. This is really just a change of bases. Because of that fact, we can find the inverse of this matrix quite conveniently by taking its transpose. That's what I did above. If you apply that transposed matrix to all of the points in world coordinates and leave the eye still, the points will maintain the same position, relative to the eye, as if the eye had rotated.
For Example:
Assign (in world coordinates):
   [ 0]    [0]    [-1]     [-2]     [1.5]
rt=[ 0] up=[1] lk=[ 0] eye=[ 0] obj=[ 0 ]
   [-1]    [0]    [ 0]     [ 1]     [-3 ]
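The example stops here, so purely as an illustration, here is a NumPy sketch that pushes those values through the world-to-eye transform described above (the numbers it prints are my own, not from the original answer):

import numpy as np

# Values from the example above (all in world coordinates)
rt  = np.array([ 0.0, 0.0, -1.0])
up  = np.array([ 0.0, 1.0,  0.0])
lk  = np.array([-1.0, 0.0,  0.0])
eye = np.array([-2.0, 0.0,  1.0])
obj = np.array([ 1.5, 0.0, -3.0])

# Rotation whose columns are rt, up, -lk (4x4 homogeneous form)
R = np.eye(4)
R[:3, 0], R[:3, 1], R[:3, 2] = rt, up, -lk

# Translation that places the eye at its world position
T = np.eye(4)
T[:3, 3] = eye

# World-to-eye is the inverse of (T @ R); for a pure rotation the inverse is the transpose
world_to_eye = R.T @ np.linalg.inv(T)   # R.T undoes the rotation, inv(T) shifts by -eye

wp = np.append(obj, 1.0)                # homogeneous world point
ep = world_to_eye @ wp
print(ep[:3])                           # obj expressed in eye coordinates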
If you transpose ATranspose in the second variant, i.e.
[x,y,z,w]^T = ATranspose^T * [x',y',z',w']^T
BTW, ^T means transpose so the original author probably meant
[x,y,z,w] = [x',y',z',w'] * A^T
and rewritten
[x,y,z,w]^T = A^T * [x',y',z',w']^T
then all these formulations are equally correct.
I think this is probably a simple maths question but I have no idea what's going on right now.
I'm capturing the positions of "markers" on a webcam and I have a list of markers and their co-ordinates. Four of the markers are the outer corners of a work surface, and the fifth (green) marker is a widget. Like this:
Here's some example data:
Top left marker (a=98, b=86)
Top right marker (c=119, d=416)
Bottom left marker (e=583, f=80)
Bottom right marker (g=569, h=409)
Widget marker (x=452, y=318)
I'd like to somehow transform the webcam's widget position into a co-ordinate to display on the screen, where top left is 0,0 not 98,86 and somehow take into account the warped angles from the webcam capture.
Where would I even begin? Any help appreciated
In order to compute the warping, you need to compute a homography between the four corners of your input rectangle and the screen.
Since your webcam polygon seems to have an arbitrary shape, a full perspective homography can be used to convert it to a rectangle. It's not that complicated, and you can solve it with a mathematical function (should be easily available) known as Singular Value Decomposition or SVD.
Background information:
For planar transformations like this, you can easily describe them with a homography, which is a 3x3 matrix H such that if any point on or in your webcam polygon, say x1 were multiplied by H, i.e. H*x1, we would get a point on the screen (rectangular), i.e. x2.
Now, note that these points are represented by their homogeneous coordinates, which is nothing but adding a third coordinate (the reason for which is beyond the scope of this post). So, suppose your coordinates for x1 were (100,100); then the homogeneous representation would be a column vector x1 = [100;100;1] (where ; represents a new row).
Ok, so now we have 8 homogeneous vectors representing 4 points on the webcam polygon and the 4 corners of your screen - this is all we need to compute a homography.
Computing the homography:
A little math:
I'm not going to get into the math, but briefly this is how we solve it:
We know that 3x3 matrix H,
H =
h11 h12 h13
h21 h22 h23
h31 h32 h33
where hij represents the element in H at the ith row and the jth column
can be used to get the new screen coordinates by x2 = H*x1. Also, the result will be something like x2 = [12;23;0.1] so to get it in the screen coordinates, we normalize it by the third element or X2 = (120,230) which is (12/0.1,23/0.1).
So this means each point in your webcam polygon (WP) can be multiplied by H (and then normalized) to get your screen coordinates (SC), i.e.
SC1 = H*WP1
SC2 = H*WP2
SC3 = H*WP3
SC4 = H*WP4
where SCi refers to the ith point in screen coordinates and
WPi means the same for the webcam polygon
Computing H: (the quick and painless explanation)
Pseudocode:
for n = 1 to 4
{
// WP_n refers to the nth point in the webcam polygon
X = WP_n;
// SC_n refers to the nth point in the screen coordinates
// corresponding to the nth point in the webcam polygon
// For example, WP_1 and SC_1 is the top-left point for the webcam
// polygon and the screen coordinates respectively.
x = SC_n(1); y = SC_n(2);
// A is the matrix which we'll solve to get H
// A(i,:) is the ith row of A
// Here we're stacking 2 rows per point correspondence on A
// X(i) is the ith element of the vector X (the webcam polygon coordinates, e.g. (120,230))
A(2*n-1,:) = [0 0 0 -X(1) -X(2) -1 y*X(1) y*X(2) y];
A(2*n,:) = [X(1) X(2) 1 0 0 0 -x*X(1) -x*X(2) -x];
}
Once you have A, just compute svd(A), which will decompose it into U, S, VT (such that A = U*S*VT). The column of V corresponding to the smallest singular value is H (once you reshape it into a 3x3 matrix).
With H, you can retrieve the "warped" coordinates of your widget marker location by multiplying it with H and normalizing.
Example:
In your particular example if we assume that your screen size is 800x600,
WP =
98 119 583 569
86 416 80 409
1 1 1 1
SC =
0 799 0 799
0 0 599 599
1 1 1 1
where each column corresponds to corresponding points.
Then we get:
H =
-0.0155 -1.2525 109.2306
-0.6854 0.0436 63.4222
0.0000 0.0001 -0.5692
Again, I'm not going into the math, but if we normalize H by h33, i.e. divide each element in H by -0.5692 in the example above,
H =
0.0272 2.2004 -191.9061
1.2042 -0.0766 -111.4258
-0.0000 -0.0002 1.0000
This gives us a lot of insight into the transformation.
[-191.9061;-111.4258] defines the translation of your points (in pixels)
[0.0272 2.2004;1.2042 -0.0766] defines the affine transformation (which is essentially scaling and rotation).
The last element is 1.0000 because we normalized H by it, and
[-0.0000 -0.0002] denotes the projective transformation of your webcam polygon.
Also, you can check if H is accurate by multiplying SC = H*WP and normalizing each column with its last element:
SC = H*WP
0.0000 -413.6395 0 -411.8448
-0.0000 0.0000 -332.7016 -308.7547
-0.5580 -0.5177 -0.5554 -0.5155
Dividing each column by its last element (e.g. in column 2, -413.6395/-0.5177 and 0/-0.5177):
SC
-0.0000 799.0000 0 799.0000
0.0000 -0.0000 599.0000 599.0000
1.0000 1.0000 1.0000 1.0000
Which is the desired result.
Widget Coordinates:
Now, your widget coordinates can be transformed as well: H*[452;318;1], which (after normalizing) is (561.4161, 440.9433).
So, this is what it would look like after warping:
As you can see, the green + represents the widget point after warping.
Notes:
There are some nice pictures in this article explaining homographies.
You can play with transformation matrices here
MATLAB Code:
WP =[
98 119 583 569
86 416 80 409
1 1 1 1
];
SC =[
0 799 0 799
0 0 599 599
1 1 1 1
];
A = zeros(8,9);
for i = 1 : 4
X = WP(:,i);
x = SC(1,i); y = SC(2,i);
A(2*i-1,:) = [0 0 0 -X(1) -X(2) -1 y*X(1) y*X(2) y];
A(2*i,:) = [X(1) X(2) 1 0 0 0 -x*X(1) -x*X(2) -x];
end
[U S V] = svd(A);
H = transpose(reshape(V(:,end),[3 3]));
H = H/H(3,3);
A =
0 0 0 -98 -86 -1 0 0 0
98 86 1 0 0 0 0 0 0
0 0 0 -119 -416 -1 0 0 0
119 416 1 0 0 0 -95081 -332384 -799
0 0 0 -583 -80 -1 349217 47920 599
583 80 1 0 0 0 0 0 0
0 0 0 -569 -409 -1 340831 244991 599
569 409 1 0 0 0 -454631 -326791 -799
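If you prefer Python over MATLAB, here is a rough NumPy equivalent of the snippet above (my own translation of the same steps; note that np.linalg.svd returns V already transposed, so the last row of Vt corresponds to MATLAB's V(:,end)):

import numpy as np

# Webcam polygon corners and screen corners (homogeneous coordinates)
WP = np.array([[98, 119, 583, 569],
               [86, 416,  80, 409],
               [ 1,   1,   1,   1]], dtype=float)
SC = np.array([[ 0, 799,   0, 799],
               [ 0,   0, 599, 599],
               [ 1,   1,   1,   1]], dtype=float)

# Stack two rows per point correspondence, as in the pseudocode above
A = np.zeros((8, 9))
for i in range(4):
    X = WP[:, i]
    x, y = SC[0, i], SC[1, i]
    A[2 * i]     = [0, 0, 0, -X[0], -X[1], -1, y * X[0], y * X[1], y]
    A[2 * i + 1] = [X[0], X[1], 1, 0, 0, 0, -x * X[0], -x * X[1], -x]

# The homography is the right singular vector for the smallest singular value
U, S, Vt = np.linalg.svd(A)
H = Vt[-1].reshape(3, 3)
H = H / H[2, 2]

# Warp the widget marker and normalize the homogeneous result;
# this should come out close to the (561.4, 440.9) quoted above
widget = H @ np.array([452.0, 318.0, 1.0])
print(widget[:2] / widget[2])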
Due to perspective effects linear or even bilinear transformations may not be accurate enough.
Look at correct perspective mapping and search Google for more on this phrase; maybe this is what you need...
Since your input area isn't a rectangle of the same aspect-ratio as the screen, you'll have to apply some sort of transformation to do the mapping.
What I would do is take the proportions of where the inner point is with respect to the outer sides and map that to the same proportions of the screen.
To do this, calculate the amount of the free space above, below, to the left, and to the right of the inner point and use the ratio to find out where in the screen the point should be.
(diagram: http://img230.imageshack.us/img230/5301/mapkg.png)
Once you have the measurements, place the inner point at (as fractions of the screen width and height):
x = left / (left + right)
y = above / (above + below)
This way, no matter how skewed the webcam frame is, you can still map to the full regular rectangle on the screen.
Try the following: split the original rectangle and this figure with 2 diagonals. Their crossing is (k, l). You have 4 distorted triangles (ab-cd-kl, cd-ef-kl, ef-gh-kl, gh-ab-kl) and the point xy is in one of them.
(4 triangles are better than 2, since the distortion doesn't depend on the diagonal chosen)
You need to find in which triangle point XY is. To do that you need only 2 checks:
Check if it's in ab-cd-ef. If true, go on with ab-cd-ef, (in your case it's not, so we proceed with cd-ef-gh).
We don't check cd-ef-gh, but already check a half of it: cd-gh-kl. The point is there. (Otherwise it would have been ef-gh-kl)
Here's an excellent algorithm to check if a point is in a polygon, using only its points.
Now you need only to map the point to the original triangle cd-gh-kl. The point xy is a linear combination of the 3 points:
x = c * a1 + g * a2 + k * (1 - a1 - a2)
y = d * a1 + h * a2 + l * (1 - a1 - a2)
a1 + a2 <= 1
2 variables (a1, a2) with 2 equations. I guess you can derive the solution formulae on your own.
Then you just make a linear combination of a1 and a2 with the corresponding points' co-ordinates in the original rectangle. In this case with W (width) and H (height) it's
X = width * a1 + width * a2 + width / 2 * (1 - a1 - a2)
Y = 0 * a1 + height * a2 + height / 2 * (1 - a1 - a2)
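Here is a minimal Python sketch of that solve-and-recombine step; the function name map_point and the argument layout are my own, and it assumes the point really does lie in the chosen triangle:

import numpy as np

def map_point(p, src_tri, dst_tri):
    # src_tri / dst_tri are three (x, y) vertex pairs, e.g. cd, gh, kl
    (c, d), (g, h), (k, l) = src_tri
    # Solve  x = c*a1 + g*a2 + k*(1 - a1 - a2)
    #        y = d*a1 + h*a2 + l*(1 - a1 - a2)   for a1 and a2
    A = np.array([[c - k, g - k],
                  [d - l, h - l]], dtype=float)
    b = np.array([p[0] - k, p[1] - l], dtype=float)
    a1, a2 = np.linalg.solve(A, b)
    # Same linear combination of the corresponding destination-triangle vertices
    (C, D), (G, H), (K, L) = dst_tri
    return (C * a1 + G * a2 + K * (1 - a1 - a2),
            D * a1 + H * a2 + L * (1 - a1 - a2))

For the cd-gh-kl case above, dst_tri would be ((width, 0), (width, height), (width/2, height/2)).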
More on how to do this in Objective-C in Xcode, related to Jacob's post, can be found here: calculate the V from A = USVt in objective-C with SVD from LAPACK in xcode
The "Kabcsh Algorithm" does exactly this: it creates a rotation matrix between two spaces given N matched pairs of positions.
http://en.wikipedia.org/wiki/Kabsch_algorithm