Compute optical flow using Blender - vector pass

I am using Blender to render a set of images with motion at different exposures. I have enabled the "Vector" pass during rendering, and I am aware of Blender's vector-pass format:
R: pixel displacement in X from current frame to previous frame
G: pixel displacement in Y from current frame to previous frame
B: pixel displacement in X from current frame to next frame
A: pixel displacement in Y from current frame to next frame
But I am a bit confused: do I need to disable the other passes like "Z" and "Combined" during this process? And if I take an image from the set, do the RGBA channels directly give me that image's vector information with respect to the previous frame or the next frame?
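For concreteness, a minimal sketch of reading the pass back outside Blender, assuming the render is saved as a multilayer OpenEXR file and the layer is named "ViewLayer" (both assumptions; the layer name varies by scene and Blender version), using the python OpenEXR bindings:
import OpenEXR
import Imath
import numpy as np

def read_vector_pass(path, layer='ViewLayer'):
    exr = OpenEXR.InputFile(path)
    dw = exr.header()['dataWindow']
    w = dw.max.x - dw.min.x + 1
    h = dw.max.y - dw.min.y + 1
    pt = Imath.PixelType(Imath.PixelType.FLOAT)
    def chan(c):
        raw = exr.channel('%s.Vector.%s' % (layer, c), pt)
        return np.frombuffer(raw, dtype=np.float32).reshape(h, w)
    # per the channel description above: X,Y (R,G) point to the
    # previous frame, Z,W (B,A) point to the next frame
    flow_to_prev = np.stack([chan('X'), chan('Y')], axis=-1)
    flow_to_next = np.stack([chan('Z'), chan('W')], axis=-1)
    return flow_to_prev, flow_to_next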
thanks a lot,
kangkan

Related

Producing a forward/directional vector from just the Euler rotation components and position

I'm currently having quite a lot of trouble producing a forward vector from my accessible transform components. I'm working within SparkAR, where objects are built around a transform class with position, rotation (Euler) and scale, as expected; however, it doesn't provide any built-in methods for retrieving general information such as the object's forward and up vectors, a transform matrix, etc.
This leaves me with the task of trying to put this together myself from the available data. The end result is simply to be able to calculate the angle between the forward vectors of two objects, mapped to a 0-1 range and used as a value for texture blending. The goal is for it to simulate fake dynamic lighting through texture blends, based on how similar the local object's forward vector is to the forward vector of the directional light.
I must have read 20+ different StackOverflow results by this point covering similar questions, however I've been unable to get correct results based on my testing using their suggestions, and at this point I feel like I need some more direct feedback to my method before I tear my eyes out.
My current process is as follows:
1) Retrieve the Euler rotation from the animated joint I want to compare against (it's in radians)
2) Convert the radian values to degrees
3) Attempt to convert the Euler values to a forward vector using the following calculation sample:
x = cos(yaw)cos(pitch)
y = sin(yaw)cos(pitch)
z = sin(pitch)
I tried a couple of other variations on that for different axis orders, which didn't provide much of a change at all.
Some small notes before continuing, X is pitch, Y is yaw and Z is roll. Positive Z is towards the screen, positive X is to the right, positive Y is up.
4) Normalise the vector (although it's unnecessary given the results I get).
5) Perform a dot product operation on the vector against a set direction vector; in this case, for testing purposes, I'm simply using (0,0,1).
6) Compare the resulting value against the expected result - incorrect.
Within a single 90-degree rotation, the value retrieved from the dot product, which by my understanding should be 1 when facing the same direction and -1 when facing the inverse, oscillates between these two range endpoints 15-20 times.
Another thing worth noting here is that I did have to swap the Y and Z components of the calculation to produce a non-zero result, so the current forward-vector calculation from Euler angles is as follows:
x = cos(yaw)cos(pitch)
y = sin(pitch)
z = sin(yaw)cos(pitch)
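For clarity, here is the same computation sketched in plain Python rather than SparkAR script (note that math.cos and math.sin expect radians, so the degree values from step 2 would need converting back before being used here):
import math

def forward_from_euler(pitch, yaw):
    # current attempt, with the Y and Z components swapped as described
    x = math.cos(yaw) * math.cos(pitch)
    y = math.sin(pitch)
    z = math.sin(yaw) * math.cos(pitch)
    return (x, y, z)

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# compare against the fixed test direction (0, 0, 1) at quarter turns of yaw
for deg in (0, 90, 180, 270):
    f = forward_from_euler(0.0, math.radians(deg))
    print(deg, f, dot(f, (0.0, 0.0, 1.0)))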
I'm really not sure where to go from here to produce the result I'm looking for, as I simply don't understand what is going wrong in the current calculations, so I can't begin to fix it.
The passed-in vectors, pre-dot-product, at the rotations 0, 90, 180 and 270 are as follows:
( 1 , 0, -0.00002)
(-0.44812, 0, -0.89397)
( 1 , 0, 0.00002)
(-0.44812, 0, 0.89397)
I can see that the Euler angles going into the calculations are definitely correct, so the RadToDeg conversion isn't screwing the input up.
Is the calculation I'm using to produce a forward vector from Euler rotations incorrect? Is there something I should be using instead?
Any advice on moving forward with this issue would be much appreciated.
Thanks.

spatstat with R: Error with defining the window of a spatial point pattern

Please see the image below. It was created by first converting a two-column data frame into a study window (call it study_window) using as.owin, and then plotting another two-column data frame (call it study_points) on top of the window.
It is clear that the points are lying inside the window! However, when I call
ppp(study_points[,1],study_points[,2],win = study_window)
it says that most of my points are rejected as lying outside the window. Could someone tell me what is going on?
Thanks!
First you could have taken a step back to check that the window object study_window was what you intended, by plotting or printing this object in its own right. A plot of study_window would show (and you can also see this in the plot that you supplied in the question) that the boundary of the window is a disconnected scatter of points, not a joined-up polygon. A printout of study_window would have revealed that it is a binary pixel mask, with a very small area, rather than a polygonal region. The help for as.owin explains that, when as.owin is applied to a data frame containing columns of x,y coordinates, it interprets them as the pixel coordinates of the pixels that lie inside the window.
So, what has happened is that as.owin has created a window consisting of one pixel at each of the (x,y) locations in the data frame. That's not what you wanted; the (x,y) coordinates were meant to be the vertices of a polygonal boundary.
To get the desired window, do something like study_window <- owin(poly=df) where df is the data frame of (x,y) coordinates of vertices.
To do it all in one step, type something like mypattern <- ppp(x, y, poly=df) where x and y are the vectors of coordinates of the points in the window.
So I solved the problem by using owin and specifying the region as a polygon, instead of using as.owin. I have no idea what the difference between owin and as.owin is, but I am just glad it worked...

Converting coordinates from a frame of reference to another frame of reference

I am trying to determine the accuracy of my object detection system, so I am feeding it rotated copies of the original image (below) at 90 deg, 45 deg, 135 deg, 180 deg, etc. For each rotated image, I want to convert the detected points from that image's frame of reference back to the original image's frame of reference (in green), so that I can combine the respective detections and determine the accuracy.
Original image link:
http://i1116.photobucket.com/albums/k572/Ruihong_Zhou/37024-Tabby-cat-white-background.jpg
For example, the system reads in a copy of the original image rotated 90 degrees clockwise:
http://i1116.photobucket.com/albums/k572/Ruihong_Zhou/37024-Tabby-cat-white-background2.jpg
Using the rotated image, the system detects something, as indicated by the red dot. However, this is relative to the purple frame of reference. How do I convert the coordinates of the red point back to the original image's frame of reference (in green) for comparison?
I considered using rotation matrices for the points; however, it seems that these matrices only work within a fixed frame of reference.
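A rotation matrix can still be used if the rotation about the image centre is inverted explicitly, keeping in mind that the rotated canvas has its own centre. A minimal numpy sketch, assuming pixel coordinates with y pointing down, a clockwise rotation angle, and a rotated canvas expanded so nothing is clipped (the function name is illustrative):
import numpy as np

def to_original_frame(point_rot, angle_deg, orig_size, rot_size):
    # point_rot: (u, v) detection in the rotated image
    # orig_size, rot_size: (width, height) of the original / rotated image
    theta = np.radians(angle_deg)
    # clockwise rotation matrix in y-down pixel coordinates
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    c_orig = (np.asarray(orig_size, float) - 1) / 2
    c_rot = (np.asarray(rot_size, float) - 1) / 2
    # the forward mapping was p_rot = R @ (p_orig - c_orig) + c_rot, so invert it
    return R.T @ (np.asarray(point_rot, float) - c_rot) + c_orig

# e.g. a red dot at (100, 250) in the 90-degree-clockwise copy of a
# 640x480 image: to_original_frame((100, 250), 90, (640, 480), (480, 640))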

Matlab contourf() to plot data on a global map

I have been using Matlab 2011b and contourf/contourfm to plot 2D data on a map of North America. I started from the help page for contourfm on the MathWorks website, and it works great if you use their default data called "geoid" and reference vector "geoidrefvec."
Here is some simple code that works with the preset data:
figure
% Lambert projection over North America
axesm('MapProjection','lambert','maplo',[-175 -45],'mapla',[10 75]);
framem; gridm; axis off; tightmap
load geoid                    % built-in demo data (comes with geoidrefvec)
%geoidrefvec=[1 90 0];
load 'TECvars.mat'            % my data: the ITEC matrix
%contourfm(ITEC, geoidrefvec, -120:20:100, 'LineStyle', 'none');
contourfm(geoid, geoidrefvec, -120:20:100, 'LineStyle', 'none');
coast = load('coast');
geoshow(coast.lat, coast.long, 'Color', 'black')   % overlay coastlines
whitebg('w')
title(sprintf('Total Electron Content Units x 10^1^6 m^-^2'),'Fontsize',14,'Color','black')
%axis([-3 -1 0 1.0]);
contourcbar                   % colour bar for the filled contours
The problem arises when I try to use my data. I am quite sure the reference vector determines where the data should be plotted on the globe, but I was not able to find any documentation about how this vector works or how to create one for different data.
Here is a .mat file with my data. ITEC is the matrix of values to be plotted. Information about the position of the grid relative to the earth can be found in the cell array called RT, but the basic idea is: ITEC(1,1) refers to Lat = 11, Long = -180, and ITEC(58,39) refers to Lat = 72.5, Long = -53, with evenly spaced data.
Does anyone know how the reference vector defines where the data is placed on the map? Or perhaps there is another way to accomplish this? Thanks in advance!
OK. So I figured it out. I realized that, given that there are only three elements in the vector, the spacing between latitude data points must be the same as the spacing between longitude data points; that is, the spacing between each horizontal data point must equal the spacing between each vertical point, for instance 1 degree.
The first value in the referencing vector is the number of cells per degree (with 1-degree spacing this equals the distance in degrees between data points, which is why it works in my case), and the two remaining values are the northernmost latitude and westernmost longitude of the grid respectively (compare geoidrefvec = [1 90 0] above, where 90 is the northern edge of the global geoid grid).
In my case the data was equally spaced in each direction, but the spacing was not the same vertically and horizontally. I simply interpolated the data to a 1x1-degree grid density and set the first value in the vector to 1.
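The regridding step, sketched in Python/numpy for concreteness (in MATLAB itself interp2 does the same job); the grid limits are the ones from the question, with random data standing in for the real ITEC matrix:
import numpy as np
from scipy.interpolate import RegularGridInterpolator

lats = np.linspace(11, 72.5, 58)     # rows: lat 11 to 72.5
lons = np.linspace(-180, -53, 39)    # cols: lon -180 to -53
itec = np.random.rand(58, 39)        # stand-in for the real ITEC matrix

interp = RegularGridInterpolator((lats, lons), itec)
new_lats = np.arange(11, 72.5, 1.0)  # 1-degree spacing
new_lons = np.arange(-180, -53, 1.0)
pts = np.stack(np.meshgrid(new_lats, new_lons, indexing='ij'), axis=-1)
itec_1deg = interp(pts)

# referencing vector: [cells per degree, northernmost lat, westernmost lon]
refvec = [1, new_lats.max(), new_lons.min()]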
Hopefully this will help someone with the same problem.
Quick question though: since I answered my own question, do I get the bounty? I'd hate to lose 50 'valuable' reputation points haha

Rotating one vector to face another but slowly?

I have two vectors: heading and target. How can I turn heading to face target by some factor? Say 10% every frame or something.
community edit: The target vector is constantly changing.
Thanks!
Find the angle between the two vectors using the dot product:
heading . target = |heading|*|target|*cos(theta)
Then every frame, rotate heading towards target by 0.10*theta using a rotation matrix.
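A minimal numpy sketch of that per-frame step, using Rodrigues' rotation formula for the general 3D case (function and parameter names are illustrative):
import numpy as np

def rotate_towards(heading, target, fraction=0.10):
    h = heading / np.linalg.norm(heading)
    t = target / np.linalg.norm(target)
    theta = np.arccos(np.clip(np.dot(h, t), -1.0, 1.0))
    axis = np.cross(h, t)
    n = np.linalg.norm(axis)
    if n < 1e-9:             # aligned (or exactly opposite): leave unchanged
        return heading
    axis /= n
    a = fraction * theta     # rotate 10% of the remaining angle this frame
    # Rodrigues' formula: rotate h by angle a about axis
    rotated = (h * np.cos(a) + np.cross(axis, h) * np.sin(a)
               + axis * np.dot(axis, h) * (1 - np.cos(a)))
    return rotated * np.linalg.norm(heading)    # keep original magnitude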
Since the only thing that matters is the direction of heading and targetHeading, we will assume all vectors are normalized. You have also said that you would like this to be true:
dheadingDegrees/dt = angle(targetHeading,heading) degrees/sec in the direction of targetHeading
(At least that's how I interpret it, as opposed to "gets closer by 10% every frame but never reaches the destination")
To get an exact answer you'd need integration and some math. If you want to simulate it and get a precise answer, you probably want to decouple this from the "frames" and simulate it at maybe 100 intervals per second, depending on the required accuracy.
Thus:
every time interval dt:
    target = getCurrentTarget()
    rotationSpeed = angleBetween(target, currentHeading) / (1 second)
    heading = {rotate heading by dt*rotationSpeed radians towards target}
              ^-------- for how to do this, see below --------^
to rotate a vector v1 to v2 from time t=0 to t=1, with constant angular velocity:
theta = angle between v1 and v2
v1normalized = normalized(v1)
v2perpNormalized = normalized(v2 - dot(v2, v1normalized)*v1normalized)
animated = cos(t*theta)*v1normalized + sin(t*theta)*v2perpNormalized
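The same interpolation as a small numpy function (a sketch assuming non-zero input vectors; it returns a unit vector in the animated direction):
import numpy as np

def rotate_fraction(v1, v2, t):
    v1n = v1 / np.linalg.norm(v1)
    v2n = v2 / np.linalg.norm(v2)
    # component of v2 perpendicular to v1, normalized
    perp = v2n - np.dot(v2n, v1n) * v1n
    norm = np.linalg.norm(perp)
    if norm < 1e-9:          # vectors already parallel: nothing to rotate
        return v1n
    perp /= norm
    theta = np.arccos(np.clip(np.dot(v1n, v2n), -1.0, 1.0))
    # t=0 gives v1's direction, t=1 gives v2's direction
    return np.cos(t * theta) * v1n + np.sin(t * theta) * perp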
