Let Max(a, min(b, c), d) be given. How do I determine x - Max(a, min(b, c), d), and is Max{x - a, min(x - b, x - c), x - d} correct?
No, it's not correct.
Negating a max/min expression inverts it: if you negate the function and also negate all of its inputs, max becomes min and vice versa.
Picture it on a coordinate axis: changing the sign mirrors everything around the origin, so a maximum becomes a minimum.
as for your example:
x - Max(a, Min(b, c), d) = Min(x - a, Max(x - b, x - c), x - d)
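If you want to convince yourself, here is a quick numerical spot-check of the identity in Python (the random sampling range is arbitrary):

import random

# Spot-check: x - max(a, min(b, c), d) == min(x - a, max(x - b, x - c), x - d)
for _ in range(10_000):
    x, a, b, c, d = (random.uniform(-100, 100) for _ in range(5))
    lhs = x - max(a, min(b, c), d)
    rhs = min(x - a, max(x - b, x - c), x - d)
    assert abs(lhs - rhs) < 1e-9
print("identity held for all random samples")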
I am new to Google Earth Engine and have started playing with mathematically combining different bands to define a new index. The problem I am having is the visualisation of the new index: I need to define the max and min parameters when adding it to the map, and I am having trouble understanding what these two end points should be. So here come my two questions:
Is it possible to get the matrix of my image in terms of pixel values? Then I could easily see from what values they range and hence could define min and max!
What values are taken in different bands? Is it from 0 to 1 and measures intensity at given wavelength, or is it something else?
Any help would be much appreciated, many thanks in advance!
Is it possible to get the matrix of my image in terms of pixel values? Then I could easily see from what values they range and hence could define min and max!
If this is what you want to do, there's a built in way to do it. Go to the layer list, click on the gear for the layer, and in the “Range” section, pick one of the “Stretch:” options from the menu, then click “Apply”. You can choose a range in standard deviations, or 100% (min and max).
You can then use the “Import” button to save these parameters as a value you can use in your script.
(All of this applies to the region of the image that's currently visible on screen — not the entire image.)
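If you would rather compute the range in a script than read it off the layer settings, one possible sketch using the Earth Engine Python API is below. The SRTM image and the rectangle are just stand-ins for your own index image and area of interest:

import ee

ee.Initialize()  # assumes your Earth Engine credentials are already set up

# Stand-in image and region; substitute your own index image and geometry.
image = ee.Image('USGS/SRTMGL1_003')
region = ee.Geometry.Rectangle([7.0, 46.0, 8.0, 47.0])

# Reduce the image over the region to get its actual min and max pixel values.
stats = image.reduceRegion(
    reducer=ee.Reducer.minMax(),
    geometry=region,
    scale=90,          # sampling resolution in meters
    maxPixels=1e9,
)
print(stats.getInfo())  # e.g. {'elevation_max': ..., 'elevation_min': ...}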
What values are taken in different bands? Is it from 0 to 1 and measures intensity at given wavelength, or is it something else?
This is entirely up to the individual dataset you are using; Earth Engine only knows about numbers stored in bands and not units of measure or spectra. There may be sufficient information in the dataset's description in the data catalog, or you may need to consult the original provider's documentation.
I have graphs of sets of points (scatter plot not reproduced here).
There are up to 1 million points on each graph. The points are scattered over a grid of cells, each sized 200 x 100 units; 35 cells are shown.
Is there an efficient way to count how many points there are in each cell? The brute-force approach seems to be to parse the data 35 times with a whole load of combined less-than/greater-than tests.
Some of the steps below could be optimized, in the sense that you could perform them as you build up the data set. However, I'll assume you are just given a series of points and have to find which cells they fit into. If you can inject your own code into the step that builds the graph, you could do the work described below alongside building the graph instead of after the fact.
If you are just given the data, you're stuck with brute force: you have to visit each point at least once to figure out which cell it is in, so we're stuck with O(n). If you have some other knowledge you could exploit, that's up to you to utilize, but since none was mentioned in the OP I'll assume brute force.
The high level strategy would be as follows:
// 1) Initialize rectangle bounds with minX/Y at +inf and maxX/Y at -inf,
//    or initialize them with the first point.
// 2) For each point:
//    set bounds.min.x = min(point.x, bounds.min.x), and do the same for
//    min.y, max.x, and max.y.
// 3) Now that you have your bounds, divide each axis by the cell size to get
//    the number of cells, rounding up (integer division truncates, so cast
//    to float and use ceil()):
int cols = ceil(float(bounds.max.x - bounds.min.x) / CELL_WIDTH);
int rows = ceil(float(bounds.max.y - bounds.min.y) / CELL_HEIGHT);
// 4) You have the number of cells across the width and height, so make a 2D
//    array that is rows * cols cells (each cell holding at least a 32-bit
//    int), zero-initialized if this is C or C++.
// 5) Find each point's cell by subtracting the bottom-left corner of the
//    bounds (the min point on the x/y axes found in steps (1)-(2)):
for (Point p : points) {
    int col = (p.x - bounds.min.x) / CELL_WIDTH;
    int row = (p.y - bounds.min.y) / CELL_HEIGHT;
    data[row][col]++;
}
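For concreteness, here is a small runnable sketch of the same idea in Python (the names and sample points are illustrative; a dictionary keyed by (row, col) is used instead of a preallocated 2D array, which also means empty cells cost nothing):

from collections import defaultdict

CELL_WIDTH, CELL_HEIGHT = 200, 100

def count_points_per_cell(points):
    # Steps 1-2: find the bottom-left corner of the bounding box.
    min_x = min(p[0] for p in points)
    min_y = min(p[1] for p in points)
    # Step 5: bin each point by integer division from that corner.
    counts = defaultdict(int)
    for x, y in points:
        col = int((x - min_x) // CELL_WIDTH)
        row = int((y - min_y) // CELL_HEIGHT)
        counts[(row, col)] += 1
    return counts

points = [(10, 5), (250, 40), (260, 150), (900, 320)]
print(dict(count_points_per_cell(points)))
# {(0, 0): 1, (0, 1): 1, (1, 1): 1, (3, 4): 1}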
Optimizations:
There are some ways we might be able to speed this up off the top of my head:
If the cell width/height are powers of two, you could replace the divisions with bit shifts (see the sketch after this list). This might speed things up if you aren't using C or C++, but I haven't profiled it, so maybe HotSpot in Java and the like would do this for you anyway (and I have no idea about Python). Then again, 1 million points should be pretty fast either way.
We don't need to go over the whole range at the beginning, we could just keep resizing our table and adding new rows and columns if we find a bigger value. This way we'd only do one iteration over all the points instead of two.
If you don't care about the extra space usage and your numbers are positive only, you could avoid the "translate to origin" subtraction step by just assuming everything is already relative to the origin and not subtract at all. You could get away with this by modifying step (1) of the code to have the min start at 0 instead of inf (or the first point if you chose that). This might be bad however if your points are really far out on the axis and you end up creating a ton of empty slots. You'd know your data and whether this is possible or not.
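As a hypothetical illustration of the bit-shift point above (the sizes here are made up, since the question's 200 x 100 cells are not powers of two):

# If CELL_WIDTH were 256 (2**8) and CELL_HEIGHT 128 (2**7), the integer
# divisions in the binning step could be written as right shifts:
x, y, min_x, min_y = 1000, 400, 0, 0   # example values
col = (x - min_x) >> 8   # same as // 256 for non-negative integers
row = (y - min_y) >> 7   # same as // 128
print(col, row)          # 3 3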
There's probably a few more things that can be done but this would get you on the right track to being efficient with it. You'd be able to work back to which cell it is as well.
EDIT: This assumes your cell width isn't really small compared to the grid's span (e.g. a cell width of 100 units while the graph spans 2 million units). If it is, you'd need to look into sparse matrices.
What does each of the numbers with slashes between them (in my plot) mean when the parameter extra=101 is used? The documentation says "Display the number of observations that fall in the node (per class for class objects; prefixed by the number of events for poisson and exp models)", but this is not clear to me.
How can I interpret them in my plot?
What does the number in the first position mean, and does it always represent the same thing? What about the number in the second position, and the one in the last position?
Thanks!
You can refer to the vignette for more information:
https://cran.r-project.org/web/packages/rpart.plot/rpart.plot.pdf
extra=101 displays the number and percentage of observations
in the node. Actually, it's a weighted percentage using the weights
passed to rpart.
In other words, for a classification tree the slash-separated numbers in each node are the counts of observations belonging to each class (in the order of the factor levels of the response), and the final number is the percentage of all observations that fall in that node.
I am implementing a method to spatially sort DICOM slices in a volume. The way I am doing it is to sort by the position along the slice normal. This is computed as:
slice_normal = [0, 0, 0]
dir_cosines = ds[0x0020, 0x0037].value  # Image Orientation (Patient): direction cosines
# Cross product of the row and column direction cosines gives the slice normal
slice_normal[0] = dir_cosines[1] * dir_cosines[5] - dir_cosines[2] * dir_cosines[4]
slice_normal[1] = dir_cosines[2] * dir_cosines[3] - dir_cosines[0] * dir_cosines[5]
slice_normal[2] = dir_cosines[0] * dir_cosines[4] - dir_cosines[1] * dir_cosines[3]
image_pos = ds[0x0020, 0x0032].value  # Image Position (Patient)
# Project the position onto the normal to get the distance along it
distance_along_normal = 0
for i in range(len(slice_normal)):
    distance_along_normal += slice_normal[i] * image_pos[i]
Now this value distance_along_normal should be equal to the Slice Location (0x0020, 0x1041), except in my case it has the opposite sign, so the slice ordering seems to be reversed from what it should be. I would like to know what else I need to take into account to compute the correct slice ordering.
There is no reason to expect that the sign or the value of the offset calculated by your implementation should be the same as the sign or the value of the Slice Location (0020,1041) value. As per the DICOM specification:
Slice Location (0020,1041) is defined as the relative position of the
image plane expressed in mm. This information is relative to an
unspecified implementation specific reference point.
Note that the direction of location is not specified at all, and it is explicitly said that the origin is arbitrary, which means that the only thing guaranteed to match between your calculation and the Slice Location is the distance (absolute value of the difference) of the positions of any 2 slices.
Also note that Slice Location is Type 3 - it does not have to be provided at all.
With regards to slice ordering - the presentation order is your decision. If you want spatial ordering, you need to decide the criteria for your ordering. For example, should axial slices be presented beginning from the head, or from the feet? That is your call completely and it will depend on the application (intended use) of your software.
If you do NOT care for the geometry, you can present images ordered for example by Instance Number (0020,0013). But there is no guarantee the ordering will have any geometrical meaning (although it usually has).
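As one possible sketch of spatial ordering (assuming pydicom and numpy, and that all slices share the same Image Orientation (Patient); whether ascending order means head-first or feet-first remains your convention to choose):

import numpy as np

def sort_slices_along_normal(datasets):
    """Sort pydicom datasets by their position projected onto the slice normal."""
    # First three IOP values are the row direction, the next three the column direction.
    row, col = np.array(datasets[0].ImageOrientationPatient, dtype=float).reshape(2, 3)
    normal = np.cross(row, col)
    # Project each slice's Image Position (Patient) onto the normal; negate the
    # key (or reverse the result) if you want the opposite direction.
    return sorted(
        datasets,
        key=lambda ds: float(np.dot(normal, np.array(ds.ImagePositionPatient, dtype=float))),
    )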
The CSS3 spec only specifies that:
The format of an HSLA color value in the functional notation is 'hsla(' followed by the hue in degrees, saturation and lightness as a percentage, and an <alphavalue>, followed by ')'.
So am I to understand that these values would be interpreted not as integers but as floats? Example:
hsla(200.2, 90.5%, 10.2%, .2)
That would dramatically expand the otherwise small (relative to RGB) range of colors covered by HSL.
It seems to render fine in Chrome, though I don't know if they simply parse it as an INT value or what.
HSL values are converted to RGB values before they are handed off to the system. It's up to the device to clip any resulting RGB value that is outside the "device gamut" (the range of colors that can be displayed) to a displayable value. Below is the algorithm the specification gives for browsers to convert HSL values to RGB values. Rounding behavior is not specified by the standard, so different implementations may round the result differently.
HOW TO RETURN hsl.to.rgb(h, s, l):
   SELECT:
      l<=0.5: PUT l*(s+1) IN m2
      ELSE: PUT l+s-l*s IN m2
   PUT l*2-m2 IN m1
   PUT hue.to.rgb(m1, m2, h+1/3) IN r
   PUT hue.to.rgb(m1, m2, h    ) IN g
   PUT hue.to.rgb(m1, m2, h-1/3) IN b
   RETURN (r, g, b)
From the proposed recommendation
In other words, you should be able to represent the exact same range of colors in HSLA as you can represent in RGB using fractional values for HSLA.
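To see this concretely, here is the spec's algorithm transcribed into Python and run with the fractional values from the question (the alpha component plays no role in the color conversion, so it is ignored here):

def hue_to_rgb(m1, m2, h):
    # Helper from the spec: maps one hue offset to an RGB component
    if h < 0: h += 1
    if h > 1: h -= 1
    if h * 6 < 1: return m1 + (m2 - m1) * h * 6
    if h * 2 < 1: return m2
    if h * 3 < 2: return m1 + (m2 - m1) * (2 / 3 - h) * 6
    return m1

def hsl_to_rgb(h, s, l):
    # h is in turns (degrees / 360); s and l are fractions of 1
    m2 = l * (s + 1) if l <= 0.5 else l + s - l * s
    m1 = l * 2 - m2
    return tuple(hue_to_rgb(m1, m2, h + off) for off in (1 / 3, 0, -1 / 3))

# hsla(200.2, 90.5%, 10.2%, .2) from the question:
r, g, b = hsl_to_rgb(200.2 / 360, 0.905, 0.102)
print(round(r * 255), round(g * 255), round(b * 255))  # prints: 2 34 50

Note how the fractional hue and percentages are consumed exactly like integral ones; the only quantization happens in the final rounding to 8-bit channels.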
AFAIK, every browser casts them to ints, but even if I'm wrong, you won't be able to tell the difference anyway. If it really matters, take screenshots and open them in Photoshop, or use an on-screen color meter. Nobody here is going to have a definitive answer without testing it, and it takes two minutes to test.
I don't know exactly, but it makes sense to just put in some floating-point numbers and see if it works; it takes two seconds to try it with a decimal and without.