Showing sort (order) values on the axis - icCube

The graph below has its members (on the left) sorted by the percentage of blue out of the total, which is a measure.
The total value is shown at the bottom, but I would like to show the percentage at the top, i.e. the value of the measure used for sorting.

So, I:
switched the measures from columns to measures in the cube section of the widget,
edited the value axis configuration and its general settings (notably, position: right),
with the following result (colors and orientation changed): one can see the values properly showing on the right-hand side.


Polygon comparison between layers

I'm trying to compare specific polygons between layers (different years), to see if there has been any change to their area, e.g. between 2019 and 2021. In this example (from left to right), A would decrease by the amount that is now outside of the layer, as well as by the amount that B is now taking instead. C would also increase in area.
The difference, symmetrical difference, union, and intersection tools all find areas outside the total bounding box of the layer, but cannot find changes to polygons within the layer.
My desired output is a change (either % or absolute) in area from each layer to a master layer (the latest year).
(Some polygons get split and/or renamed, however I suspect this is a small enough cohort that it can be addressed manually.)
Edit: I'm currently trying "Detect Dataset Changes", but the polygons don't overlap exactly; the 0.00000001% differences along the borders mess it up and mark every polygon as changed. Posted as a new question: Is there a tolerance setting I can use with the "Detect Dataset Changes" tool?
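In case it helps, the per-polygon comparison itself can also be scripted outside of QGIS. Below is only a rough sketch using Shapely (not one of the QGIS tools mentioned above): the two toy squares stand in for the 2019 and 2021 versions of polygon A, and the 0.001 snapping tolerance is an assumed value.

from shapely.geometry import Polygon

# Toy stand-ins for the 2019 and 2021 versions of polygon "A" (made-up geometries)
poly_2019 = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
poly_2021 = Polygon([(0, 0), (8, 0), (8, 10.0000001), (0, 10)])  # smaller, with a tiny border jitter

# Absolute and percentage change in area from the old layer to the master layer
abs_change = poly_2021.area - poly_2019.area
pct_change = 100.0 * abs_change / poly_2019.area

# A negative-then-positive buffer (a morphological opening) removes slivers thinner
# than 2 * tol, so the tiny border jitter no longer registers as a change
tol = 0.001
changed_area = poly_2019.symmetric_difference(poly_2021).buffer(-tol).buffer(tol).area

print(f"A: {abs_change:+.3f} ({pct_change:+.2f}%), genuinely changed area: {changed_area:.3f}")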

How can I scale a glyph down vertically while keeping the vertical stroke widths the same (and not altering any of the horizontal dimensions)?

I'm using FontForge. I'm modifying the lower-case q to make a straight-stalked 9. The q has two logical parts: the stalk and the 'c'. The 'c' is too big vertically. How can I scale it down vertically while keeping the vertical stroke widths the same (and not altering any of the horizontal dimensions)?
I'm a novice with FontForge, so please spell out your explanation and provide step-by-step instructions. Thanks for your help.
It sounds like you want to decrease the x-height of the 'q' without changing stroke widths.
FontForge provides a built-in tool for this: Element > Styles > Change X-Height. You might like to experiment with it, but in practice it gives you very little control over the results and I would rarely use it.
Instead I would achieve this by directly modifying the nodes of the paths.
First, I would ensure that InterpolateCPsOnMotion is enabled. Double-click the pointer tool in your toolbox to access this setting.
This will help ensure that curves scale correctly (rather than distort) as you move control points. Now, I would select the nodes at the top and sides of the bowl of the q:
and use the down arrow key to move them down about half the distance you wish to decrease the height by. Then I would deselect the nodes at the side of the bowl:
and lower the remaining nodes the rest of the distance:
You will need to check the resulting appearance and possibly make tweaks to get it perfect. Note that this or any scaling technique can subtly distort the axis of modulated strokes, which you may wish to correct.
This technique presupposes that nodes are sensibly placed at the vertical and horizontal extrema of the bowl, and that you don't have extra nodes between these extrema. If you are not in this happy situation, you can add points at the extrema with Ctrl+Shift+X, and you can remove surplus nodes by selecting them and pressing Ctrl+M. If you can't remove the extra nodes without significantly changing the shape of the bowl, you'll just have to adjust those nodes by eye.
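If you have several glyphs to adjust, the same node-lowering idea can be scripted with FontForge's Python module. Treat the following as an untested sketch rather than a drop-in solution: the file name and the Y_CUTOFF / DROP values are assumptions, and it lowers every node (and control handle) above the cutoff by the same amount in one pass, so you may also need an x test to leave the top of the stalk alone.

# Untested sketch: lower the upper nodes of 'q' using FontForge's Python module.
# "MyFont.sfd", Y_CUTOFF and DROP are assumed values -- adjust them to your font.
import fontforge

font = fontforge.open("MyFont.sfd")
glyph = font["q"]

Y_CUTOFF = 350   # assumed: nodes above this height belong to the top of the bowl
DROP = 40        # assumed: how far to lower them

layer = glyph.foreground          # a copy of the glyph's contours
for contour in layer:
    for point in contour:         # on-curve points and control handles alike
        if point.y > Y_CUTOFF:
            point.y -= DROP
glyph.foreground = layer          # write the modified contours back to the glyph
font.save("MyFont-modified.sfd")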
Another point: you say you're working from a "c". I'm not sure whether you just mean the C-shape of the bowl of the q, or whether you mean you are copying the actual glyph 'c'. Note that it is rare that the bowl of a 'q' will have exactly the same shape and weight as a 'c'. Typically the stroke will be somewhat lighter to achieve the right visual grey, and especial care will be taken where it intersects the stem. Often the two shapes will differ substantially.

WatchKit Glance group gets covered up

I'm setting up my interface for a Watch Glance, and the top group is getting cut off by the bottom group:
Here's how it's set up in Interface Builder:
I'm not sure why it's getting cut off / covered up by the other group.
You can choose the default upper and lower layout of your Glance, but I do not believe you can give the upper and lower sections a particular size.
Look into your group's constraints: try selecting the image and setting its size in the Attributes Inspector. It may need a fixed size in order for the labels to fit into the upper section.
Otherwise the outer groups will not size to fit your content; they will only size large enough to fill the default space that is given to Glances in each section.

How to deal with arbitrary size for Laplacian Pyramid?

Recently I had much fun with the Laplacian Pyramid algorithm (http://persci.mit.edu/pub_pdfs/pyramid83.pdf). But one big problem is that the original paper is limited to images of size (2^m + 1) × (2^n + 1). My question is: what is the best way to deal with an arbitrary w × h instead? I can think of a couple of options:
Upsample the input to the next (2^m + 1) × (2^n + 1) size up front.
Pad even lines. How exactly? Wouldn't it shift the signal?
Shift even lines by half a sample? Wouldn't it lose half a sample?
Does anybody have experience with this? What is the most practical and efficient approach? Also any pointers to papers dealing with this would be very welcome.
One approach is to create an image with a width and height equal to the next (2^m + 1) × (2^n + 1), but instead of up-sampling the image to fill the expanded dimensions, just place it in the top-left corner and fill the empty space to the right and below with a constant value (the average value for the image is a good choice for this). Then encode in the normal way, storing the original image dimensions along with the pyramid. When decoding, decode and then crop to the original size.
This won't introduce any visual artifacts or degradation because you aren't stretching or offsetting the image in any way.
Because the empty space to the right and below the original image is a constant value, the high-pass bands at each level in the image pyramid will be all zero in this area. So if you are using a compression scheme like run-length encoding to store each level, this will be taken care of automatically and these areas will compress to almost nothing. If not, you can simply store the top-left (potentially non-zero) area of each level and fill out the rest with zeros when decoding.
You could also find the bounding rectangle (min and max x and y) of the non-zero values for each level and store each level cropped to that rectangle, along with the rectangle itself. The decoder could likewise be optimized so that areas of the image that will be cropped away are never decoded in the first place, by only processing the top-left part of each level.
Here's an illustration of the technique:
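A minimal sketch of the padding step, assuming NumPy (the function name and the use of the mean as the fill value simply follow the description above):

import numpy as np

def pad_to_pyramid_size(img):
    # Place img in the top-left of a (2^m + 1) x (2^n + 1) canvas filled with its mean value
    h, w = img.shape
    H = 2 ** int(np.ceil(np.log2(h - 1))) + 1   # next 2^m + 1 >= h
    W = 2 ** int(np.ceil(np.log2(w - 1))) + 1   # next 2^n + 1 >= w
    canvas = np.full((H, W), img.mean(), dtype=img.dtype)
    canvas[:h, :w] = img
    return canvas, (h, w)   # keep the original size so the decoder can crop

padded, orig_size = pad_to_pyramid_size(np.random.rand(300, 500))
print(padded.shape)   # (513, 513)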
Instead of just filling the lower-right area with a flat color, you could fill it with horizontally and vertically mirrored copies of the image to the right and below, and a copy mirrored in both directions to the bottom-right, like this:
This will avoid the discontinuities of the first technique, although there will be a discontinuity in dx (e.g. if the value was gradually increasing from left to right it will suddenly be decreasing). Choosing a mirror that keeps dx constant and ddx zero will avoid this second-order discontinuity by linearly extrapolating the values.
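Assuming NumPy again, this mirrored fill is essentially what np.pad gives you with mode="symmetric" (a mirror that repeats the edge sample; mode="reflect" is the variant that omits it). A short sketch with an assumed 300 x 500 image:

import numpy as np

img = np.random.rand(300, 500)
H, W = 513, 513                                   # next (2^m + 1) x (2^n + 1), as above
pad_h, pad_w = H - img.shape[0], W - img.shape[1]

# Mirrored copies below, to the right, and (mirrored both ways) to the bottom-right
mirrored = np.pad(img, ((0, pad_h), (0, pad_w)), mode="symmetric")
print(mirrored.shape)                             # (513, 513)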
Another technique, which is similar to what some JPEG encoders do to pad out an image to a whole number of MCU blocks, is to take the last pixel value of each row and repeat it, and likewise for columns, with the bottom-right-most pixel of the image used to fill the bottom-right area:
This last technique could easily be modified to extrapolate the gradient of values or even the gradient of gradients instead of just repeating the same value for the remainder of the row or column.
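The repeat-the-last-pixel fill is likewise a one-liner with NumPy's edge mode; a rough sketch under the same assumptions:

import numpy as np

img = np.random.rand(300, 500)
pad_h, pad_w = 513 - img.shape[0], 513 - img.shape[1]

# Repeat the last row downwards and the last column to the right; the bottom-right
# area ends up filled with the bottom-right-most pixel, as described above
replicated = np.pad(img, ((0, pad_h), (0, pad_w)), mode="edge")
print(replicated.shape)                           # (513, 513)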

Extract pixel coordinates in scilab

I have extracted an edge using image processing, then I selected pixel coordinates on the extracted edge using xclick. Is this correct, or is there a need to reverse the y-axis coordinates? (The extracted edge is white on a black background.)
I want to extract the pixel coordinates of the extracted edge automatically, not by mouse selection. Is there any command available in Scilab? (I use a Canny edge detector and a morphological filter to extract the edge.)
Please give me some suggestions.
Thanks
1.) Whether to reverse the y coordinate or not depends on the further processing. Any coordinate system can be used if you only need relative measurements and the true orientation of your features is not important (e.g. reversing top and bottom makes no difference if you simply want to count objects or droplets). However, if you want to indicate the found features by plotting a dot, a line, or a rectangle (e.g. with plot2d or xrect) or a number (e.g. with xnumb) over the image, then it is necessary to match the two coordinate systems. I recommend this second option, and to plot your results over the original image, since this is the easiest way to check them.
2.) Automatic coordinate extraction can be done with the find function: it returns the indices of the matrix where the expression is true.
IM=[0,0,0,1;0,0,0,1;0,1,1,1;1,1,0,0]; //edge image, edge = 1, background = 0
disp(IM,"Edge image");
[row,col]=find(IM==1); //row & column indices where IM = 1 (= edge)
disp([row',col'],"Edge coordinates (row, col)");
If your edge image marks the edges not with 1 (or 255, a pure white pixel) but with a relatively high value (a bright pixel), then you can modify the logical expression of the find function to detect pixels with a value above a certain threshold:
[row,col]=find(IM>0.8); //if edges > a certain threshold, e.g. 0.8
EDIT: For your specific image:
Try the following code:
imagefile="d:\Attila\PROJECTS\Scilab\Stackoverflow\MORPHOLOGICAL_FILTERING.jpg";
//you have to modify this path!
I=imread(imagefile);
IM=imcrop(I,[170,100,950,370]); //discard the thick white border of the image
scf(0); clf(0);
ShowImage(IM,'cropped image');
threshold=100; //try different values between 0-255 (black - white)
[row,col]=find(IM>threshold);
imheight=size(IM,"r"); //image height
row=imheight-row+1; //reverse y axes coordinates (0 is at top)
plot2d(col,row,style=0); //plot over the image (zoom to see the dots)
scf(1); clf(1); //plot separate graph
plot2d(col,row,style=0);
If you play with the threshold parameter, you will see how the darker or whiter pixels are found.
