Max dot size in R ggplot

I am trying to write an R script that saves a series of maps with dots on them. I used ggmap for the maps and geom_point for the dots. There is a map for each day in a certain time range.
The size of the dots depends on a certain variable, but I have a problem scaling them. Later I need to create an animation of all the maps, showing the dots' size changing through time. That means I need a global scale for the dot sizes, ranging from the smallest value (zero) to the largest value of that variable (the global maximum). In most maps the largest value is not reached.
I tried with:
scale_size_area(max_size = max(my_variable))
because scale_size_area allows plotting very tiny dots for the 0 values. I was hoping that this code would scale the dots correctly, using the global maximum as the maximum size, but it doesn't seem to work. Each map still has a locally largest dot that is drawn at the same size as the largest dot in every other map, so two points with different values can end up the same size.
I hope I have explained my problem clearly. I'd be glad to hear some suggestions.

To set the maximum and minimum values in a scale (size or otherwise), use limits:
scale_size_area(limits = c(NA, max(your_variable)))
The NA tells ggplot to compute the minimum from the data.
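For the animation use case above, a minimal sketch might look like this (the data frame all_days and the columns day, lon, lat and my_variable are hypothetical names, not from the question):

library(ggplot2)

# Global maximum computed once over the whole time range
global_max <- max(all_days$my_variable)

# One plot per day, all sharing the same size scale
ggplot(subset(all_days, day == 1), aes(x = lon, y = lat, size = my_variable)) +
  geom_point() +
  scale_size_area(limits = c(NA, global_max), max_size = 10)

Because limits are fixed across all the plots, a dot with a given value gets the same size on every map, whether or not the global maximum is reached on that day.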

Related

Domain coloring (color wheel) plots of complex functions in Octave (Matlab)

I understand that domain or color wheel plotting is typical for complex functions.
Incredibly, among the million-plus results a web search returns, I can't find one that easily allows me to reproduce a piece of art like the examples on Wikipedia.
There is an online resource that reproduces plots with the zeros in black - not bad at all... However, I'd like to ask for some simple annotated code in Octave that produces color plots of functions of complex numbers.
I have seen code to plot a complex function, but it uses a different technique, with the height representing the real part of the function's value and the color representing the imaginary part.
Peter Kovesi has some fantastic color maps. He provides a MATLAB function, called colorcet, that we can use here to get the cyclic color map we need to represent the phase. Download this function before running the code below.
Let's start with creating a complex-valued test function f, where the magnitude increases from the center, and the phase is equal to the angle around the center. Much like the example you show:
% A test function
[xx,yy] = meshgrid(-128:128,-128:128);
z = xx + yy*1i;
f = z;
Next, we'll get its phase, convert it into an index into the colorcet C2 color map (which is cyclic), and finally reshape that back into the original function's shape. out here has three dimensions: the first two are the original image dimensions, and the last one is RGB. imshow displays such a 3D matrix as a color image.
% Create a color image according to phase
cm = colorcet('C2');
phase = floor((angle(f) + pi) * ((size(cm,1)-1e-6) / (2*pi))) + 1;
out = cm(phase,:);
out = reshape(out,[size(f),3]);
The last part is to modulate the intensity of these colors using the magnitude of f. To make the discontinuities at powers of two, we take the base 2 logarithm, apply the modulo operation, and compute the power of two again. A simple multiplication with out decreases the intensity of the color where necessary:
% Compute the intensity, with discontinuities for |f|=2^n
magnitude = 0.5 * 2.^mod(log2(abs(f)),1);
out = out .* magnitude;
That last multiplication works in Octave and in the later versions of MATLAB. For older versions of MATLAB you need to use bsxfun instead:
out = bsxfun(@times, out, magnitude);
Finally, display using imshow:
% Display
imshow(out)
Note that the colors here are more muted than in your example. The colorcet color maps are perceptually uniform, which means that the same change in angle leads to the same perceptual change in color. In the image you posted, yellow, for instance, is a very narrow, bright band. Such a band leads to false highlighting of certain features in the function, which might not be relevant at all. Perceptually uniform color maps are very important for proper interpretation of the data. Note also that this particular color map has easily named colors (purple, blue, green, yellow) in the four cardinal directions: a purely real value is green (positive) or purple (negative), and a purely imaginary value is blue (positive) or yellow (negative).
There is also a great online tool made by Juan Carlos Ponce Campuzano for color wheel plotting.
In my experience it is much easier to use than the Octave solution. The downside is that you cannot use perceptually uniform coloring.

Calculate a dynamic iteration value when zooming into a Mandelbrot

I'm trying to figure out how to automatically adjust the maximum iteration value when moving around in the Mandelbrot fractal.
All the examples I've found use a constant of 1000 or less, but that's not enough when zooming into the fractal set.
Is there a way to determine the number of max_iterations based on for example where you are in the Mandelbrot space (x_start,x_end,y_start,y_end)?
One method I tried was to repeatedly pre-process a small area in the region of the Mset boundary with increasing iterations, until the percentage change in status from one repetition to the next was small. The problem was that this would vary in different places on the current map, since the "depth" varies across it. How do you find the right place to probe? By logging the "deepest" boundary area during the previous generation (one that will still be within the next zoom area).
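As a rough illustration of that probing idea, here is a minimal sketch in R (the sample size, starting cap and tolerance are all illustrative assumptions): sample a small set of points in the region, and double the iteration cap until the fraction of points that escape stops changing much.

# Fraction of sample points cs that escape within max_iter iterations
escape_fraction <- function(cs, max_iter) {
  z <- rep(0+0i, length(cs))
  alive <- rep(TRUE, length(cs))      # points that have not escaped yet
  for (i in seq_len(max_iter)) {
    z[alive] <- z[alive]^2 + cs[alive]
    alive <- alive & (Mod(z) <= 2)
  }
  mean(!alive)
}

# Double max_iter until the escape fraction stabilises
choose_max_iter <- function(cs, start = 250, tol = 0.005) {
  max_iter <- start
  prev <- escape_fraction(cs, max_iter)
  repeat {
    max_iter <- max_iter * 2
    cur <- escape_fraction(cs, max_iter)
    if (abs(cur - prev) < tol) return(max_iter)
    prev <- cur
  }
}

# Hypothetical usage, sampling the current view (x_start etc. as in the question):
# cs <- complex(real = runif(2000, x_start, x_end),
#               imaginary = runif(2000, y_start, y_end))
# max_iterations <- choose_max_iter(cs)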
But my best strategy was to avoid iterating wherever possible:
Away from the boundary of the Mset, areas of equal depth can be "contoured" and then filled with that depth. It was not an easy algorithm. Basically I followed a raster scan, but when I detected a boundary of iteration change (examining all the neighbours to ensure I wasn't close to the edge of the Mset), I would switch to a curve-stitching method to iterate around a contour back to where it started (obviously not recalculating spots I had already done), and then make a second pass filling in the raster lines within the contour with the iteration level. It was fraught with leaks, but eventually I cracked it.
Within the Mset, I followed the same approach, because the very last thing you want to do is to plough across vast areas and hit the iteration limit.
The difficult area is close to the boundary, where the iteration results can't be related to smooth contours shared with the neighbours. The contour-stitching method won't work here, since there is only ever one pixel of a particular depth.
The contour method will also produce faults on the lower or Mset sides of this region, but since this area looks chaotic until you zoom deeper, I lived with that.
So having said all that, I simply set the iteration depth as high as I can tolerate, but perhaps you can combine my first paragraph with the area-filling techniques.
By the way, colouring the region adjacent to the Mset looks terrible when you attempt an animated smooth playback of the zoom. For that reason I coloured this area in grey scale, by comparing with the neighbours: if there was too much difference, I coloured it 0x808080 at first, then adapted that depending on the predominance of the neighbours' depth. All requiring fine tuning!

Colors in treemap

It would be great to clarify how colors are calculated when plotting a treemap (I use the gvisTreeMap function from the R googleVis library).
The documentation is not very informative. What is meant by "The color value is first recomputed on a scale from minColorValue to maxColorValue"? I usually use a treemap to display sales (size) and sales difference (color). Ideally I would like to color the rectangles so that I can distinguish positive from negative growth, which as I understand is not possible at the moment.
What bothers me most right now is that "... colors are valued relative to all other nodes in the graph". Is there any way to fix the colors, so that a given sales difference, say -25, always gets the same color?
If I have understood your problem correctly, I believe the following will solve it:
Let's say your data is percentages, so it can go from 0 to 100. Set minColorValue=-100 and maxColorValue=100.
(Or, if you are using a different range, just set it so that the minimum value is the negative of the maximum value, so that the average is 0.)
Then, if you set the colours to, for example, minColor='red' and maxColor='green', this should solve part 1 (negative values will be displayed in red, and positive in green).
Also, it seems that setting maxColor and minColor fixes the average value that the colors are calculated from, so this also solves part 2 (that is, -25 will then always get the same color in the graph).
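Putting that together, a minimal sketch might look like this (the data frame sales and its column names are hypothetical; minColorValue, maxColorValue, minColor, midColor and maxColor are documented treemap options passed through gvisTreeMap):

library(googleVis)

# Hypothetical data frame with one row per region plus a root row:
# Region, Parent, Revenue (size), Growth (color, in percent, -100..100)
tm <- gvisTreeMap(sales,
                  idvar = "Region", parentvar = "Parent",
                  sizevar = "Revenue", colorvar = "Growth",
                  options = list(minColorValue = -100,
                                 maxColorValue = 100,
                                 minColor = "red",
                                 midColor = "white",
                                 maxColor = "green"))
plot(tm)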
Color is computed as the average color value of all child nodes of a branch. A branch with no child nodes uses the color value from the DataTable. This color value is then scaled on the minColorValue to maxColorValue scale, and a color is computed between minColor and maxColor based on the scale.
Colors are not relative to other nodes on the graph - the size of the node is relative.

Finding the image boundary

While I use R quite a bit, I have just started an image analysis project, using the EBImage package. I need to collect a lot of data from circular/elliptical images. The built-in function computeFeatures gives the maximum and minimum radius, but I need all of the radii it computes.
Here is the code. I have read the image, thresholded it and filled the holes.
actual.image = readImage("xxxx")
image = actual.image[,2070:4000]
image1 = thresh(image)
image1 = fillHull(image1)
As there are several objects in the image, I used the following to label
image1 = bwlabel(image1)
I generated features using the built in function
features = data.frame(computeFeatures(image1,image))
Now, computeFeatures gives the max radius and min radius, but I need all the radii of all the objects for my analysis. At least if I can get the coordinates of the boundaries of all the objects, I can compute the radii with some other code.
I know images are stored as matrices, and I could come up with a convoluted way to find the boundaries and then compute the radii, but I was wondering: is there a more elegant method?
You could try extracting each object plus some padding, and plotting the x and y axis intensity profiles for each object. The intensity profiles are simply the sums of the rows/columns, which can be computed using rowSums and colSums in R.
Then you could find where the profile drops off by splitting each intensity profile in half and finding the nearest minimum value.
Maybe an example would help clear things up:
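A minimal sketch of the idea, assuming image1 is the labelled mask from bwlabel above (the label picked and the padding of 10 pixels are illustrative choices):

library(EBImage)

# Bounding box of the first labelled object, plus some padding
idx <- which(imageData(image1) == 1, arr.ind = TRUE)
pad <- 10
rows <- max(1, min(idx[, 1]) - pad):min(dim(image)[1], max(idx[, 1]) + pad)
cols <- max(1, min(idx[, 2]) - pad):min(dim(image)[2], max(idx[, 2]) + pad)
crop <- imageData(image)[rows, cols]

# Intensity profiles: sums along each axis
x_profile <- colSums(crop)    # profile along the x axis
y_profile <- rowSums(crop)    # profile along the y axis
plot(x_profile, type = "l")
plot(y_profile, type = "l")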
Hopefully this makes sense

Scaling connected lines

I have a shape consisting of vertical, horizontal and diagonal lines. For each line I have the starting X,Y and the ending X,Y (this is my input: just two points defining a line), and I would like to make the whole shape scalable just by changing the value of a scale-ratio variable, while still preserving the proper connection of the lines and the proportions. To give a better idea of what I mean: it would be as if I had the same lines in a vector editor.
Is that possible with an algorithm, and if there is no such algorithm, could you please suggest another possible solution?
Thank you very much in advance!
What point do you want the shape to scale about? You could scale relative to the first point, the center, or some other arbitrary location. Typically, you subtract an offset (for instance the first point in your input), multiply by a scale factor, and then add the offset back.
A more systematic approach in computer graphics would be to use a transformation matrix, although that's probably overkill in your case.
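A minimal sketch of the subtract-scale-add approach in R (the data frame columns and the choice of the first point as the origin are illustrative assumptions):

# lines: data frame with columns x1, y1, x2, y2 (one row per line)
scale_shape <- function(lines, ratio, cx = lines$x1[1], cy = lines$y1[1]) {
  lines$x1 <- (lines$x1 - cx) * ratio + cx
  lines$y1 <- (lines$y1 - cy) * ratio + cy
  lines$x2 <- (lines$x2 - cx) * ratio + cx
  lines$y2 <- (lines$y2 - cy) * ratio + cy
  lines
}

# Example: scale a two-line shape to 150% about its first point
shape <- data.frame(x1 = c(0, 10), y1 = c(0, 0), x2 = c(10, 10), y2 = c(0, 5))
scale_shape(shape, 1.5)

Because every endpoint goes through the same transformation, lines that shared an endpoint before scaling still share it afterwards, so the connections are preserved.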
