Meaning of xcos datatype dimensions - scilab

I am experiencing conflicts between xcos blocks. For example, I am not able to connect a real [-2 1] output to a real [1 1] input.
Does anybody know, in general, what negative indices mean for datatype size?

This is explained in https://help.scilab.org/docs/6.1.0/en_US/scicos_model.html :
A vector specifying the number and size of the first dimension of regular input ports, indexed from top to bottom of the block. If no input port exists, in==[]. Each size can be negative, equal to zero, or positive:
• If a size is less than zero, the compiler will try to find the appropriate size.
• If a size is equal to zero, the compiler will set this dimension by adding up all the positive sizes found in that vector.
• If a size is greater than zero, the size is given explicitly.

Related

Calculating central difference

I have the following definition for calculating the gradient at a pixel using central difference:
Where h is small, f'(x)=f(x+0.5h)-f(x-0.5h)
• If we make h twice the distance between pixels,
• the above equation simply states that the image gradient at a pixel is the next (right) pixel's value minus the previous (left) pixel's value.
Why is it not necessary to divide by h to get the rate of change? Why does simply subtracting the left pixel's value from the right pixel's value give the derivative at the central pixel?
Your definition is wrong. You do need to divide by h to get a proper estimate of the derivative.
In image processing we often see definitions for derivatives that are off by a scaling factor, like what you have here. In most applications the scaling is not important; what matters is comparing values in different parts of the image, for example to find the most salient edges. For these cases it is OK to use a simplified definition (which may also be cheaper to compute).
For example, the Sobel operator is usually defined in a way that it produces a value 8 times larger than the derivative it tries to estimate.
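For concreteness, here is a minimal NumPy sketch (the pixel values are made up) contrasting the proper central difference, which divides by h, with the shortcut that skips the division and therefore returns an estimate scaled by a factor of 2:

import numpy as np

# One row of pixel intensities (hypothetical values).
f = np.array([10.0, 12.0, 15.0, 15.0, 11.0, 8.0])

# Central difference with h = 2 pixels: f'(x) ~ (f(x+1) - f(x-1)) / 2
true_estimate = (f[2:] - f[:-2]) / 2.0

# The common shortcut drops the division by h, so every value is doubled.
scaled_estimate = f[2:] - f[:-2]

print(true_estimate)    # [ 2.5  1.5 -2.  -3.5]
print(scaled_estimate)  # [ 5.   3.  -4.  -7. ]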

How to calculate the number bracket a given number falls within a set of numbers?

I have a contiguous set of numbers up to a maximum (1...y). I'm trying to find in which increment (defined by another value x) a given number (z) falls within that set.
Below is an image that best describes what I'm trying to find.
Is there a formula I can use with the available information to achieve this?
n = ceil(z*x/y);
In your example, the size of the range is divisible by the number of bins, so that all bins have equal size. If that condition does not hold then there may be some further questions about edge cases.
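As a quick illustration of that formula (interpreting x as the number of equal bins, which is what the formula implies; the function name and example numbers below are mine):

import math

def bin_index(z, x, y):
    # 1-based index of the bin that z falls into when 1..y is split into x equal bins.
    # Assumes y is divisible by x, as in the original example.
    return math.ceil(z * x / y)

print(bin_index(1, 4, 100))    # 1  (falls in 1..25)
print(bin_index(25, 4, 100))   # 1
print(bin_index(26, 4, 100))   # 2  (falls in 26..50)
print(bin_index(100, 4, 100))  # 4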

Handle "Division by Zero" in Image Processing (or PRNU estimation)

I have the following equation, which I try to implement. The upcoming question is not necessarily about this equation, but more generally about how to deal with divisions by zero in image processing:
Here, I is an image, W is the difference between the image and its denoised version (so W expresses the noise in the image), and K is an estimated fingerprint, obtained from d images of the same camera. All calculations are done pixel-wise, so the equation does not involve a matrix multiplication. For more on the idea of estimating digital fingerprints, consult the corresponding literature, such as the general Wikipedia article or scientific papers.
However, my problem arises when an image has a pixel with value zero, e.g. perfect black (let's say we only have one image, k=1, so the zero does not get overwritten by chance by the pixel value of the next image, if that next pixel value is nonzero). Then I have a division by zero, which is not defined.
How can I overcome this problem? One option I came up with is adding +1 to all pixels right before I even start the calculations. However, this shifts the range of pixel values from [0, 255] to [1, 256], which then makes it impossible to work with the data type uint8.
Other authors in papers I read on this topic often do not consider values close to the range borders. For example, they only evaluate the equation for pixel values in [5, 250]. They justify this not because of the numerical problem, but because, if an image is totally saturated or totally black, the fingerprint cannot be estimated properly in that area anyway.
But again, my main concern is not how this algorithm performs best, but rather, in general: how to deal with divisions by 0 in image processing?
One solution is to use subtraction instead of division; however, subtraction is not scale invariant, it is translation invariant.
[e.g. the ratio will always be a normalized value between 0 and 1, and if it exceeds 1 you can invert it; you can have the same normalization with subtraction, but you need to find the maximum values attained by the variables]
Eventually you will have to deal with division. Dividing a black image by itself is a case you do have to consider; you can translate the values to some other range and then transform back.
However, 5/8 is not the same as 55/58, so you can take this only in a relative way. If you want to know the exact ratios, you are better off sticking with the original interval and handling those values as special cases: e.g. if denom==0, do something with it; if num==0 and denom==0, then 0/0 means we have an identity, exactly as if we had 1/1.
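A possible NumPy sketch of handling those special cases explicitly (the function name and the fill-value conventions are my own illustration, not from the answer above):

import numpy as np

def safe_divide(num, den, zero_over_zero=1.0, x_over_zero=np.inf):
    # Element-wise division with the 0/0 and x/0 cases handled explicitly.
    num = np.asarray(num, dtype=float)
    den = np.asarray(den, dtype=float)
    out = np.empty_like(num)

    both_zero = (num == 0) & (den == 0)
    den_zero = (den == 0) & ~both_zero
    ok = ~(both_zero | den_zero)

    out[ok] = num[ok] / den[ok]
    out[both_zero] = zero_over_zero   # treat 0/0 as an identity, i.e. like 1/1
    out[den_zero] = x_over_zero       # or clip, mask, etc.
    return out

print(safe_divide([0, 3, 5], [0, 0, 8]))  # [1.    inf  0.625]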
In PRNU and fingerprint estimation, if you check the MATLAB implementation on Jessica Fridrich's webpage, they basically create a mask to get rid of saturated and low-intensity pixels, as you mentioned. Then they convert the image matrix with single(I), which makes the image 32-bit floating point, add 1 to the image, and divide.
To your general question: in image processing, I like to create a mask and add one only to the zero-valued pixels.
img = imread('my gray img');        % grayscale image, uint8
a_mat = rand(size(img));             % example numerator, double precision
mask = uint8(img==0);                % 1 where the pixel is exactly zero
div = a_mat ./ double(img + mask);   % element-wise division; the denominator is never zero
This will prevent the division-by-zero error. (Not tested, but it should work.)
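For reference, a rough NumPy equivalent of the same mask idea (array names and sizes here are arbitrary):

import numpy as np

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)   # stand-in for a gray image
a_mat = np.random.rand(4, 4)                               # stand-in numerator

den = img.astype(np.float64)   # work in floating point, like single(I) in the MATLAB code
den[den == 0] = 1.0            # bump only the zero-valued pixels
div = a_mat / den              # element-wise; the denominator is never zero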

Wavelets: how can a zero-padded length-n signal be truncated to n coefficients from which the signal can be reconstructed in R?

The wavelet transform is defined for infinite-length signals. Finite-length signals must be extended in some way before they can be transformed. I know that periodic replication and zero padding are appropriate for signals that begin and end on the baseline, while mirror-image replication and linear extrapolation provide continuity at the boundaries for signals that do not begin or end on the baseline. Periodic replication either wraps around or reflects signal detail in the region beyond the boundaries, and this distorts the interpretation of the transform coefficients near the boundaries.
I have a time series of limited duration that does not extend beyond the signal range and whose length is not a power of two, as required by the transform (using the wavethresh package, function "wst", the packet-ordered non-decimated wavelet transform). Zero padding seems to be the only way forward for signals that begin and end on the baseline; it also makes no assumptions about the signal beyond the boundaries, describing only the signal itself. However, zero padding results in a non-length-preserving transform (one in which the transform vector is longer than the signal vector), and large perturbations in the transform space are not reflected in the signal space.
With zero padding (zeros added at the beginning and end of the series), my question is: how can I truncate the transform of the zero-padded signal back to n coefficients and still be able to reconstruct the signal exactly in R? I have reviewed the different packages in R and have not found a way around this.
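Not an R answer, but a rough Python/PyWavelets sketch of the pad-transform-reconstruct-truncate round trip (note it slices the reconstruction back to the original n samples; it does not shorten the transform itself to n coefficients, and it uses an ordinary decimated DWT rather than wavethresh's "wst"):

import numpy as np
import pywt   # assumption: PyWavelets, used only for illustration

def pad_to_pow2(x):
    # Zero-pad at both ends so the length becomes the next power of two.
    n = len(x)
    n2 = 1 << (n - 1).bit_length()
    left = (n2 - n) // 2
    return np.pad(x, (left, n2 - n - left)), left, n

x = np.sin(np.linspace(0, 4 * np.pi, 100))   # length 100, not a power of two
xp, left, n = pad_to_pow2(x)                 # padded to length 128

coeffs = pywt.wavedec(xp, 'db4', mode='periodization')   # transform of the padded signal
xr = pywt.waverec(coeffs, 'db4', mode='periodization')   # reconstruct the padded signal
x_back = xr[left:left + n]                   # slice back to the original n samples
print(np.allclose(x_back, x))                # True, up to floating-point error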

How do browsers handle rgb(percentage) for strange numbers?

This is related to CSS color codes:
With hex codes we can represent 16,777,216 colors, from #000000 to #FFFFFF.
According to W3C Specs, Valid RGB percentages fit in a range from (0.0% to 100.0%) essentially giving you 1,003,003,001 color combinations. (1001^3)
According to the specs:
Values outside the device gamut should be clipped or mapped into the gamut when the gamut is known: the red, green, and blue values must be changed to fall within the range supported by the device. User agents may perform higher quality mapping of colors from one gamut to another. For a typical CRT monitor, whose device gamut is the same as sRGB, the four rules below are equivalent:
I'm doubtful that browsers can actually render all of these values (but if they do, please tell me and ignore the rest of this post).
I'm assuming there's some mapping from rgb(percentage) to hex (but again, I'm not really sure how this works).
Ideally I'd like to find out the function rgb(percentage)->HEX
If I had to guess it would probably be one of these 3.
1) Round to the nearest HEX
2) CEIL to the nearest HEX
3) FLOOR to the nearest HEX
Problem is I need to be accurate on the mapping and I have no idea where to search.
There's no way my eyes can differentiate color at that level, but maybe there's some clever way to test each of these 3.
It might also be browser dependent. Can this be tested?
EDIT:
From empirical testing, Firefox seems to round.
EDIT:
I'm looking through Firefox's source code right now,
nsColor.h
// A color is a 32 bit unsigned integer with four components: R, G, B
// and A.
typedef PRUint32 nscolor;
It seems Firefox only has room for 256 values for each of R, G and B, hinting that rounding might be the answer, but maybe something's being done with the alpha channel.
I think I found a solution for Firefox anyways, thought you might like a follow up:
Looking through the source code I found a file:
nsCSSParser.cpp
For each RGB percentage it does the following:
1) It takes the percentage component and multiplies it by 255.0f
2) Stores it in a float
3) Passes it into the function NSToIntRound
4) The result of NSToIntRound is stored into an 8-bit integer datatype, before it is combined with the other two components and an alpha channel
Looking for more detail on NSToIntRound:
nsCoord.h
inline PRInt32 NSToIntRound(float aValue)
{
return NS_lroundf(aValue);
}
NSToIntRound is a wrapper function for NS_lroundf
nsMathUtils.h
inline NS_HIDDEN_(PRInt32) NS_lroundf(float x)
{
return x >= 0.0f ? PRInt32(x + 0.5f) : PRInt32(x - 0.5f);
}
This function is actually very clever, took me a while to decipher (I don't really have a good C++ background).
Assuming x is positive:
• It adds 0.5f to x and then casts to an integer.
• If the fractional part of x was less than 0.5, adding 0.5 won't change the integer part, and the fractional part is truncated.
• Otherwise the integer part is bumped by 1 and the fractional part is truncated.
So each component's percentage is first multiplied by 255.0f, then rounded and cast into a 32-bit integer, and then cast again into an 8-bit integer.
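Put together, the mapping Firefox applies can be mimicked with a few lines of Python (the function names are mine; this mirrors the code above, not anything mandated by the spec):

def percent_to_byte(p):
    # percentage -> [0, 255]: multiply by 255 and round half away from zero, like NS_lroundf
    v = p / 100.0 * 255.0
    return int(v + 0.5) if v >= 0.0 else int(v - 0.5)

def rgb_percent_to_hex(r, g, b):
    return '#{:02X}{:02X}{:02X}'.format(
        percent_to_byte(r), percent_to_byte(g), percent_to_byte(b))

print(rgb_percent_to_hex(100, 50, 0))        # #FF8000
print(rgb_percent_to_hex(23.456, 78.9, 0))   # #3CC900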
I agree with most of you that say this appears to be a browser dependent issue, so I will do some further research on other browsers.
Thanks a bunch!
According to W3C Specs, Valid RGB percentages fit in a range from (0.0% to 100.0%) essentially giving you 1,003,003,001 color combinations. (1001^3)
No, more than that, because the precision is not limited to one decimal place. For example, this is valid syntax:
rgb(23.456% 78.90123456% 0%)
The reason for this is that, while 8 bits per component is common (hence hex codes), newer hardware supports 10 or 12 bits per component, and wider-gamut colorspaces need more bits to avoid banding.
This bit-depth agnosticism is also why newer CSS color specifications use a 0 to 1 float range.
Having said which, the CSS Object Model still requires color values to be serialized at 8 bits per component. This is going to change, but the higher-precision replacement is still being discussed in the CSS working group. So for now, browsers don't let you get more than 8 bits per component of precision.
If you are converting a float or percentage form to hex (or to a 0-255 integer), the correct method is rounding. Floor or ceiling will not space the values evenly at the top or bottom of the range.
