Plotting Count (colormap) in R

I have a 200 by 200 matrix in R. Each column represents a future state (1-200) and each row represents a current state (1-200). Each entry is the likelihood of going from a given current state (row) to a given future state (column).
I would like to visualize this as a plot. The image() function in R was suggested, but the scales are highly incorrect, and when I attempt to change them the entire color map changes too.
Ideally, I'd like to use color to show the most likely transitions (dark color) grading through to the less likely transitions (light color).
Could anybody point me in the right direction?

Related

Domain coloring (color wheel) plots of complex functions in Octave (Matlab)

I understand that domain coloring, or color-wheel plotting, is typical for complex functions.
Surprisingly, even with millions of results on a web search, I can't find anything that would easily let me reproduce a piece of art like the one on Wikipedia.
There is this online resource that reproduces plots with zeros in black - not bad at all... However, I'd like to ask for some simple annotated code in Octave to produce color plots of functions of complex numbers.
I have seen code to plot a complex function, but it uses a different technique, with the height representing the real part of the function's value and the color representing the imaginary part.
Peter Kovesi has some fantastic color maps. He provides a MATLAB function, called colorcet, that we can use here to get the cyclic color map we need to represent the phase. Download this function before running the code below.
Let's start by creating a complex-valued test function f, where the magnitude increases from the center and the phase is equal to the angle around the center, much like the example you show:
% A test function
[xx,yy] = meshgrid(-128:128,-128:128);
z = xx + yy*1i;
f = z;
Next, we'll get its phase, convert it into an index into the colorcet C2 color map (which is cyclic), and finally reshape that back into the original function's shape. out here has 3 dimensions, the first two are the original dimensions, and the last one is RGB. imshow shows such a 3D matrix as a color image.
% Create a color image according to phase
cm = colorcet('C2');
phase = floor((angle(f) + pi) * ((size(cm,1)-1e-6) / (2*pi))) + 1;
out = cm(phase,:);
out = reshape(out,[size(f),3]);
The last part is to modulate the intensity of these colors using the magnitude of f. To make the discontinuities at powers of two, we take the base 2 logarithm, apply the modulo operation, and compute the power of two again. A simple multiplication with out decreases the intensity of the color where necessary:
% Compute the intensity, with discontinuities for |f|=2^n
magnitude = 0.5 * 2.^mod(log2(abs(f)),1);
out = out .* magnitude;
That last multiplication works in Octave and in the later versions of MATLAB. For older versions of MATLAB you need to use bsxfun instead:
out = bsxfun(@times,out,magnitude);
Finally, display using imshow:
% Display
imshow(out)
Note that the colors here are more muted than in your example. The colorcet color maps are perceptually uniform. That means that the same change in angle leads to the same perceptual change in color. In the example you posted, yellow, for instance, is a very narrow, bright band. Such a band leads to false highlighting of certain features in the function, which might not be relevant at all. Perceptually uniform color maps are very important for proper interpretation of the data. Note also that this particular color map has easily-named colors (purple, blue, green, yellow) in the four cardinal directions. A purely real value is green (positive) or purple (negative), and a purely imaginary value is blue (positive) or yellow (negative).
There is also a great online tool made by Juan Carlos Ponce Campuzano for color wheel plotting.
In my experience it is much easier to use than the Octave solution. The downside is that you cannot use perceptually uniform coloring.

Fading effect with QCPItemTracer

I have been able to use QCPItemTracer to trace a specific point on my data when plotting. How do I achieve a fade out effect? That is, as the next point is plotted, the last n points fade out slowly. Does Qt provide such a feature?
I'm not familiar with this class of QCustomPlot, but it should be easy to implement what you are asking for yourself. You just need to keep track of the last n points. When it comes to plotting, this is often referred to as oscilloscope-type persistence.
A fade-out effect is usually achieved by gradually changing either the alpha channel or the color value of the item you want to affect. The first is relatively easy but requires alpha support (QCustomPlot does support it) and decreases the performance of your plotting tool. The second requires you to calculate a gradient starting at the color the point was originally plotted with and going all the way to whatever background color you have selected for your plot. The gradient step can be derived directly from n.
For every (n+1)-th point, just iterate through the n points before it
For each of those points, reduce the alpha or change the color
I'm presuming that the fade-out effect you want also needs to be distributed unequally among the points based on their age, with point n (the youngest) being the least affected and point 0 (the oldest) being the most affected; a small numeric sketch of such a schedule follows.
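To make the arithmetic concrete, here is a minimal numeric sketch of such a schedule. It is plain Octave rather than Qt/C++ (QCustomPlot itself is a C++ library), and n, pointColor and background are made-up values; it only illustrates how the per-point alpha or blended color can be derived from a point's age.
% Hypothetical fade-out schedule for the last n traced points (Octave sketch)
n = 10;                          % number of points kept in the trail
pointColor = [1 0 0];            % color the points were originally plotted with
background = [1 1 1];            % plot background color
age = 0:n-1;                     % 0 = youngest point, n-1 = oldest point
t = age / (n - 1);               % gradient step, derived directly from n

% Option 1: keep the color and reduce the alpha channel with age
alphas = 1 - t;

% Option 2: blend the point color toward the background color with age
fadedColors = (1 - t') * pointColor + t' * background;   % n-by-3 RGB rows
In QCustomPlot itself you would then apply these values to the pen/brush colors of the last n items each time a new point is added.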

Graphics, smooth shading and normals

I'm trying to achieve smooth shading of triangles in my graphics program; however, I'm currently stuck on how to do it exactly. I've got two options.
Option 1: (per vertex)
Create a "zero" Vector.
Add the non-normalized normal of every incident triangle to the created vector.
Scale the resulting vector by 1 / incidentTriangleCount.
Return the normalized version of the resulting vector.
Option 2: (per vertex)
Create a "zero" Vector.
Add the normalized normal of every incident triangle to the created vector.
Scale the resulting vector by 1 / incidentTriangleCount.
Return the non-normalized version of the resulting vector.
Both approaches are giving me different results and I don't really know which one to take. Can anyone give me advice on this?
Always work with normalized normals. That way your two options merge into a single one :)
Besides, you have to be careful when using "every" incident triangle, because in that case your entire model will be smoothed, which is not good. E.g. a model of a pencil that actually has edges will look like a rounded one. Implement a threshold, i.e. only consider triangles whose normals have a relatively small angle between them.
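For illustration, here is a small Octave/MATLAB-style sketch of that recommendation: accumulate the normalized face normals at each vertex and renormalize the sum (the 1/count scaling becomes irrelevant once you normalize at the end). The vertex array V and face array F below are made-up toy data.
% Toy mesh: two triangles sharing the edge between vertices 1 and 3
V = [0 0 0; 1 0 0; 1 1 0.2; 0 1 0];   % n-by-3 vertex positions
F = [1 2 3; 1 3 4];                    % m-by-3 vertex indices per triangle

% Normalized face normals, so every incident triangle gets equal weight
e1 = V(F(:,2),:) - V(F(:,1),:);
e2 = V(F(:,3),:) - V(F(:,1),:);
fn = cross(e1, e2, 2);
fn = fn ./ sqrt(sum(fn.^2, 2));

% Accumulate at each vertex, then normalize the sum
vn = zeros(size(V));
for i = 1:size(F,1)
  for c = 1:3
    vn(F(i,c),:) = vn(F(i,c),:) + fn(i,:);
  end
end
vn = vn ./ sqrt(sum(vn.^2, 2));        % per-vertex normals

% To keep hard edges, one could skip (or split) face normals whose angle to
% the others exceeds a chosen threshold before accumulating, as suggested above.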

Accurately measuring relative distance between a set of fiducials (Augmented reality application)

Let's say I have a set of 5 markers. I am trying to find the relative distances between each marker using an augmented reality framework such as ARToolkit. In my camera feed the first 20 frames show me the first 2 markers only, so I can work out the transformation between the 2 markers. The second 20 frames show me the 2nd and 3rd markers only, and so on. The last 20 frames show me the 5th and 1st markers. I want to build up a 3D map of the marker positions of all 5 markers.
My question is, knowing that there will be inaccuracies with the distances due to low quality of the video feed, how do I minimise the inaccuracies given all the information I have gathered?
My naive approach would be to use the first marker as a base point: from the first 20 frames take the mean of the transformations and place the 2nd marker, and so forth for the 3rd and 4th. For the 5th marker, place it in between the 4th and 1st by putting it at the middle of the mean of the transformations between the 5th and 1st and between the 4th and 5th. I feel this approach has a bias towards the first marker placement, though, and doesn't take into account the camera seeing more than 2 markers per frame.
Ultimately I want my system to be able to work out the map for any number x of markers. In any given frame up to x markers can appear, and there are non-systematic errors due to the image quality.
Any help regarding the correct approach to this problem would be greatly appreciated.
Edit:
More information regarding the problem:
Let's say the real-world map is as follows:
Let's say I get 100 readings for each of the transformations between the points, as represented by the arrows in the image. The real values are written above the arrows.
The values I obtain have some error (assumed to follow a Gaussian distribution about the actual value). For instance, one of the readings obtained for marker 1 to 2 could be x:9.8 y:0.09. Given that I have all these readings, how do I estimate the map? The result should ideally be as close to the real values as possible.
My naive approach has the following problem: if the average of the transforms from 1 to 2 is slightly off, the placement of 3 can be off even though the reading of 2 to 3 is very accurate. This problem is shown below:
The greens are the actual values, the blacks are the calculated values. The average transform of 1 to 2 is x:10 y:2.
You can use a least-squares method to find the transformation that gives the best fit to all your data. If all you want is the distance between the markers, this is just the average of the distances measured.
Assuming that your marker positions are fixed (e.g., to a fixed rigid body), and you want their relative position, then you can simply record their positions and average them. If there is a potential for confusing one marker with another, you can track them from frame to frame, and use the continuity of each marker location between its two periods to confirm its identity.
If you expect your rigid body to be moving (or if the body is not rigid, and so forth), then your problem is significantly harder. Two markers at a time is not sufficient to fix the position of a rigid body (which requires three). However, note that, at each transition, you have the location of the old marker, the new marker, and the continuous marker, at almost the same time. If you already have an expected location on the body for each of your markers, this should provide a good estimate of a rigid pose every 20 frames.
In general, if your body is moving, best performance will require some kind of model for its dynamics, which should be used to track its pose over time. Given a dynamic model, you can use a Kalman filter to do the tracking; Kalman filters are well-adapted to integrating the kind of data you describe.
By including the locations of your markers as part of the Kalman state vector, you may be able to deduce their relative locations from purely sensor data (which appears to be your goal), rather than requiring this information a priori. If you want to be able to handle an arbitrary number of markers efficiently, you may need to come up with some clever mutation of the usual methods; your problem seems designed to avoid solution by conventional decomposition methods such as sequential Kalman filtering.
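As a rough illustration of the Kalman idea (not tied to ARToolkit or to any particular marker layout), here is a minimal 1-D constant-velocity filter in Octave; dt, Q, R and the simulated measurements are all assumed/tuning values.
% Minimal 1-D constant-velocity Kalman filter (sketch)
dt = 1/30;                           % assumed frame interval
F = [1 dt; 0 1];                     % state transition for [position; velocity]
H = [1 0];                           % we only measure position
Q = 1e-3 * eye(2);                   % process noise (tuning parameter)
R = 0.5;                             % measurement noise variance
x = [0; 0];  P = eye(2);             % initial state and covariance

z = cumsum(0.1 * ones(1,100)) + sqrt(R) * randn(1,100);  % fake noisy measurements
est = zeros(1, numel(z));
for k = 1:numel(z)
  x = F * x;            P = F * P * F' + Q;         % predict
  K = P * H' / (H * P * H' + R);                    % Kalman gain
  x = x + K * (z(k) - H * x);                       % update with measurement k
  P = (eye(2) - K * H) * P;
  est(k) = x(1);                                    % filtered position estimate
end
For the marker problem the state would instead contain the body pose (and, if you want to estimate them online, the marker offsets), but the predict/update structure is the same.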
Edit, as per the comments below:
If your markers yield a full 3D pose (instead of just a 3D position), the additional data will make it easier to maintain accurate information about the object you are tracking. However, the recommendations above still apply:
If the labeled body is fixed, use a least-squares fit of all relevant frame data.
If the labeled body is moving, model its dynamics and use a Kalman filter.
New points that come to mind:
Trying to manage a chain of relative transformations may not be the best way to approach the problem; as you note, it is prone to accumulated error. However, it is not necessarily a bad way, either, as long as you can implement the necessary math in that framework.
In particular, a least-squares fit should work perfectly well with a chain or ring of relative poses (see the sketch below).
In any case, for either a least-squares fit or for Kalman filter tracking, a good estimate of the uncertainty of your measurements will improve performance.
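As a concrete sketch of the least-squares idea (again, not ARToolkit-specific), here is Octave code that estimates 2-D marker positions from noisy pairwise offsets measured around a ring of 5 markers. The ground-truth positions, the noise level and the number of readings are all made up; marker 1 is pinned to the origin to remove the global translation ambiguity.
% Least-squares map from noisy relative measurements (sketch)
truePos = [0 0; 10 2; 20 0; 15 -8; 5 -8];   % made-up ground truth (5 markers)
pairs   = [1 2; 2 3; 3 4; 4 5; 5 1];        % ring of observed marker pairs
nObs    = 100;                              % readings per pair
nM      = size(truePos, 1);

A = zeros(2 * nObs * size(pairs,1) + 2, 2 * nM);
b = zeros(size(A, 1), 1);
r = 0;
for p = 1:size(pairs,1)
  i = pairs(p,1);  j = pairs(p,2);
  d = truePos(j,:) - truePos(i,:);
  for k = 1:nObs
    meas = d + 0.2 * randn(1,2);            % one noisy reading of p_j - p_i
    A(r+1, 2*j-1) = 1;  A(r+1, 2*i-1) = -1;  b(r+1) = meas(1);   % x equation
    A(r+2, 2*j)   = 1;  A(r+2, 2*i)   = -1;  b(r+2) = meas(2);   % y equation
    r = r + 2;
  end
end
A(r+1, 1) = 1;  A(r+2, 2) = 1;              % pin marker 1 at the origin
est = reshape(A \ b, 2, nM)';               % least-squares marker positions
Because every reading contributes a row, the solution automatically averages all readings per pair and spreads the ring-closure error over all markers instead of piling it onto the last one.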

How to detect a trend inside unsteady data (e.g. Trendly)?

I was wondering what kind of model / method / technique Trendly might use to achieve something like this:
[It tries to find the moments where significant changes set in and ignores random movements]
Any pointers very welcome! :)
I've never seen 'Trendly' and don't know anything about it, but if I wanted to produce that red line from that blue line in an algorithmic fashion, I would try the following (a rough sketch of steps 3-7 follows the list):
1. Fourier-transform the whole data set.
2. Choose a block size longer than the period of the dominant frequency.
3. Divide the data up into blocks of the chosen size.
4. Compare adjacent blocks with a statistical test of some sort.
5. Where the test says two blocks belong to the same underlying distribution, merge them.
6. If any were merged, go back to 4.
7. The red trend line is the mean of each block.
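Here is a rough Octave sketch of steps 3-7, using a crude difference-of-means test in place of a formal statistical test; the test signal, block size and threshold are all made up.
% Block-merging trend sketch (steps 3-7 above)
y = [zeros(1,200), 5*ones(1,200), 2*ones(1,200)] + randn(1,600);  % fake data
B = 50;                                   % block size (assumed > dominant period)
starts = 1:B:numel(y);
blocks = [starts(:), min(starts(:) + B - 1, numel(y))];  % [start, end] per block

merged = true;
while merged                              % step 6: repeat while anything merged
  merged = false;
  k = 1;
  while k < size(blocks, 1)
    a = y(blocks(k,1):blocks(k,2));
    b = y(blocks(k+1,1):blocks(k+1,2));
    se = sqrt(var(a)/numel(a) + var(b)/numel(b));   % standard error of the difference
    if abs(mean(a) - mean(b)) < 2 * se    % steps 4-5: crude "same distribution" test
      blocks(k,2) = blocks(k+1,2);        % merge block k+1 into block k
      blocks(k+1,:) = [];
      merged = true;
    else
      k = k + 1;
    end
  end
end

trend = zeros(size(y));                   % step 7: trend = mean of each block
for k = 1:size(blocks, 1)
  trend(blocks(k,1):blocks(k,2)) = mean(y(blocks(k,1):blocks(k,2)));
end
plot(1:numel(y), y, 1:numel(y), trend);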
A simple "median" function could produce smoother curves over a mostly un-smooth curve.
Otherwise, a brute-force or genetic algorithm could be used; attempting to find the way to split the data into sections, so that more sections = worse solution, and less accuracy of the lines = worse solution.
Another way would be like this: start at the beginning. As soon as the line moves outside of some radius (3 above or 3 below the first, for instance), set the new height to an average of the current line's height and the previous marker.
If you keep doing that, it will ignore small fluctuations. However, if a fluctuation is large enough, it will still affect the result.
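A minimal Octave sketch of that idea, with a made-up noisy series and a radius of 3:
% Deadband-style trend sketch
y = cumsum(randn(1,500));            % made-up unsteady data
r = 3;                               % radius: ignore moves within +/- r
level = zeros(size(y));
level(1) = y(1);                     % start at the beginning
for k = 2:numel(y)
  if abs(y(k) - level(k-1)) > r
    % large move: new height = average of current value and previous marker
    level(k) = (y(k) + level(k-1)) / 2;
  else
    level(k) = level(k-1);           % small fluctuation: ignore it
  end
end
plot(1:numel(y), y, 1:numel(y), level);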

Resources